00:00:00.000  Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2376
00:00:00.000  originally caused by:
00:00:00.000   Started by upstream project "nightly-trigger" build number 3641
00:00:00.000   originally caused by:
00:00:00.000    Started by timer
00:00:00.073  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.073  The recommended git tool is: git
00:00:00.074  using credential 00000000-0000-0000-0000-000000000002
00:00:00.075   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu24-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.091  Fetching changes from the remote Git repository
00:00:00.093   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.118  Using shallow fetch with depth 1
00:00:00.118  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.118   > git --version # timeout=10
00:00:00.143   > git --version # 'git version 2.39.2'
00:00:00.143  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.170  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.170   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.372   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.384   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.396  Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:03.396   > git config core.sparsecheckout # timeout=10
00:00:03.407   > git read-tree -mu HEAD # timeout=10
00:00:03.422   > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:03.440  Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:03.440   > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
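
The checkout above shallow-fetches a single branch and pins the workspace to the fetched commit. A minimal sketch of the same pattern, assuming a generic $REPO_URL (hypothetical variable; the log uses the Gerrit build-pool URL):

# Shallow-fetch one branch and pin the checkout to FETCH_HEAD, mirroring
# the git sequence above.
git fetch --tags --force --depth=1 -- "$REPO_URL" refs/heads/master
rev=$(git rev-parse 'FETCH_HEAD^{commit}')
git checkout -f "$rev"
git log --oneline -n1    # confirm the pinned commit message
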
00:00:03.522  [Pipeline] Start of Pipeline
00:00:03.533  [Pipeline] library
00:00:03.534  Loading library shm_lib@master
00:00:03.535  Library shm_lib@master is cached. Copying from home.
00:00:03.547  [Pipeline] node
00:00:03.560  Running on VM-host-SM9 in /var/jenkins/workspace/ubuntu24-vg-autotest
00:00:03.561  [Pipeline] {
00:00:03.568  [Pipeline] catchError
00:00:03.569  [Pipeline] {
00:00:03.577  [Pipeline] wrap
00:00:03.584  [Pipeline] {
00:00:03.590  [Pipeline] stage
00:00:03.592  [Pipeline] { (Prologue)
00:00:03.606  [Pipeline] echo
00:00:03.607  Node: VM-host-SM9
00:00:03.613  [Pipeline] cleanWs
00:00:03.624  [WS-CLEANUP] Deleting project workspace...
00:00:03.624  [WS-CLEANUP] Deferred wipeout is used...
00:00:03.630  [WS-CLEANUP] done
00:00:03.813  [Pipeline] setCustomBuildProperty
00:00:03.878  [Pipeline] httpRequest
00:00:04.222  [Pipeline] echo
00:00:04.224  Sorcerer 10.211.164.20 is alive
00:00:04.233  [Pipeline] retry
00:00:04.235  [Pipeline] {
00:00:04.252  [Pipeline] httpRequest
00:00:04.256  HttpMethod: GET
00:00:04.256  URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.257  Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.258  Response Code: HTTP/1.1 200 OK
00:00:04.259  Success: Status code 200 is in the accepted range: 200,404
00:00:04.259  Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.548  [Pipeline] }
00:00:04.564  [Pipeline] // retry
00:00:04.571  [Pipeline] sh
00:00:04.851  + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:04.867  [Pipeline] httpRequest
00:00:05.191  [Pipeline] echo
00:00:05.192  Sorcerer 10.211.164.20 is alive
00:00:05.202  [Pipeline] retry
00:00:05.204  [Pipeline] {
00:00:05.217  [Pipeline] httpRequest
00:00:05.221  HttpMethod: GET
00:00:05.222  URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:05.222  Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:05.223  Response Code: HTTP/1.1 200 OK
00:00:05.223  Success: Status code 200 is in the accepted range: 200,404
00:00:05.224  Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:28.357  [Pipeline] }
00:00:28.377  [Pipeline] // retry
00:00:28.394  [Pipeline] sh
00:00:28.684  + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz
00:00:31.226  [Pipeline] sh
00:00:31.510  + git -C spdk log --oneline -n5
00:00:31.510  83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:00:31.510  0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort()
00:00:31.510  4bcab9fb9 correct kick for CQ full case
00:00:31.510  8531656d3 test/nvmf: Interrupt test for local pcie nvme device
00:00:31.510  318515b44 nvme/perf: interrupt mode support for pcie controller
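
Both tarballs come from the internal "Sorcerer" package cache, keyed by commit SHA, and each extraction is verified with a git log of the unpacked tree; the same fetch/extract/verify pattern repeats for the dpdk tarball below. A sketch of the idea using plain curl (the pipeline itself goes through the Jenkins httpRequest step):

# Fetch a repo snapshot by commit SHA from the cache, unpack it, and
# confirm the tree's HEAD matches.
sha=83e8405e4c25408c010ba2b9e02ce45e2347370c
curl -fSs -o "spdk_${sha}.tar.gz" "http://10.211.164.20/packages/spdk_${sha}.tar.gz"
tar --no-same-owner -xf "spdk_${sha}.tar.gz"
git -C spdk log --oneline -n1    # first line should begin with ${sha:0:9}
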
00:00:31.530  [Pipeline] withCredentials
00:00:31.543   > git --version # timeout=10
00:00:31.556   > git --version # 'git version 2.39.2'
00:00:31.573  Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:31.575  [Pipeline] {
00:00:31.586  [Pipeline] retry
00:00:31.588  [Pipeline] {
00:00:31.606  [Pipeline] sh
00:00:31.887  + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:31.898  [Pipeline] }
00:00:31.916  [Pipeline] // retry
00:00:31.922  [Pipeline] }
00:00:31.937  [Pipeline] // withCredentials
00:00:31.947  [Pipeline] httpRequest
00:00:32.335  [Pipeline] echo
00:00:32.337  Sorcerer 10.211.164.20 is alive
00:00:32.347  [Pipeline] retry
00:00:32.349  [Pipeline] {
00:00:32.364  [Pipeline] httpRequest
00:00:32.368  HttpMethod: GET
00:00:32.369  URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:32.369  Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:32.381  Response Code: HTTP/1.1 200 OK
00:00:32.382  Success: Status code 200 is in the accepted range: 200,404
00:00:32.383  Saving response body to /var/jenkins/workspace/ubuntu24-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:01.927  [Pipeline] }
00:01:01.946  [Pipeline] // retry
00:01:01.954  [Pipeline] sh
00:01:02.237  + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:03.630  [Pipeline] sh
00:01:03.912  + git -C dpdk log --oneline -n5
00:01:03.912  caf0f5d395 version: 22.11.4
00:01:03.912  7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:03.912  dc9c799c7d vhost: fix missing spinlock unlock
00:01:03.912  4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:03.912  6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:03.931  [Pipeline] writeFile
00:01:03.946  [Pipeline] sh
00:01:04.229  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:04.242  [Pipeline] sh
00:01:04.524  + cat autorun-spdk.conf
00:01:04.524  SPDK_TEST_UNITTEST=1
00:01:04.524  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.524  SPDK_TEST_NVME=1
00:01:04.524  SPDK_TEST_BLOCKDEV=1
00:01:04.524  SPDK_RUN_ASAN=1
00:01:04.524  SPDK_RUN_UBSAN=1
00:01:04.524  SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:04.524  SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:04.524  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:04.531  RUN_NIGHTLY=1
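
autorun-spdk.conf is a plain KEY=VALUE shell fragment; every later stage sources it and branches on the flags (prepare_nvme.sh does exactly this below). A minimal consumer sketch:

# Source the job configuration and branch on its flags.
source ./autorun-spdk.conf
(( SPDK_RUN_ASAN == 1 )) && echo "ASan build requested"
echo "external DPDK: ${SPDK_TEST_NATIVE_DPDK:-none} -> ${SPDK_RUN_EXTERNAL_DPDK:-n/a}"
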
00:01:04.533  [Pipeline] }
00:01:04.547  [Pipeline] // stage
00:01:04.561  [Pipeline] stage
00:01:04.563  [Pipeline] { (Run VM)
00:01:04.576  [Pipeline] sh
00:01:04.860  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:04.860  + echo 'Start stage prepare_nvme.sh'
00:01:04.860  Start stage prepare_nvme.sh
00:01:04.860  + [[ -n 1 ]]
00:01:04.860  + disk_prefix=ex1
00:01:04.860  + [[ -n /var/jenkins/workspace/ubuntu24-vg-autotest ]]
00:01:04.860  + [[ -e /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf ]]
00:01:04.860  + source /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf
00:01:04.860  ++ SPDK_TEST_UNITTEST=1
00:01:04.860  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.860  ++ SPDK_TEST_NVME=1
00:01:04.860  ++ SPDK_TEST_BLOCKDEV=1
00:01:04.860  ++ SPDK_RUN_ASAN=1
00:01:04.860  ++ SPDK_RUN_UBSAN=1
00:01:04.860  ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:04.860  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:04.860  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:04.860  ++ RUN_NIGHTLY=1
00:01:04.860  + cd /var/jenkins/workspace/ubuntu24-vg-autotest
00:01:04.860  + nvme_files=()
00:01:04.860  + declare -A nvme_files
00:01:04.860  + backend_dir=/var/lib/libvirt/images/backends
00:01:04.860  + nvme_files['nvme.img']=5G
00:01:04.860  + nvme_files['nvme-cmb.img']=5G
00:01:04.860  + nvme_files['nvme-multi0.img']=4G
00:01:04.860  + nvme_files['nvme-multi1.img']=4G
00:01:04.860  + nvme_files['nvme-multi2.img']=4G
00:01:04.860  + nvme_files['nvme-openstack.img']=8G
00:01:04.860  + nvme_files['nvme-zns.img']=5G
00:01:04.860  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:01:04.860  + ((  SPDK_TEST_FTL == 1  ))
00:01:04.860  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:01:04.860  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:04.860  + for nvme in "${!nvme_files[@]}"
00:01:04.860  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:04.860  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:04.860  + for nvme in "${!nvme_files[@]}"
00:01:04.860  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:04.860  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:04.860  + for nvme in "${!nvme_files[@]}"
00:01:04.860  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:04.860  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:04.860  + for nvme in "${!nvme_files[@]}"
00:01:04.860  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:04.861  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:04.861  + for nvme in "${!nvme_files[@]}"
00:01:04.861  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:04.861  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:04.861  + for nvme in "${!nvme_files[@]}"
00:01:04.861  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:05.120  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:05.120  + for nvme in "${!nvme_files[@]}"
00:01:05.120  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:05.120  Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:05.120  ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:05.120  + echo 'End stage prepare_nvme.sh'
00:01:05.120  End stage prepare_nvme.sh
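
prepare_nvme.sh drives image creation from a bash associative array mapping image names to sizes; the images above are created out of declaration order because ${!nvme_files[@]} yields keys in unspecified order. A condensed sketch of the loop:

# Map image names to sizes, then create each backing file (the ex1-
# prefix comes from disk_prefix above).
declare -A nvme_files=([nvme.img]=5G [nvme-multi0.img]=4G)
backend_dir=/var/lib/libvirt/images/backends
for nvme in "${!nvme_files[@]}"; do
    sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
        -n "$backend_dir/ex1-$nvme" -s "${nvme_files[$nvme]}"
done
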
00:01:05.133  [Pipeline] sh
00:01:05.417  + DISTRO=ubuntu2404 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:05.417  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f ubuntu2404
00:01:05.677  
00:01:05.677  DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant
00:01:05.677  SPDK_DIR=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk
00:01:05.677  VAGRANT_TARGET=/var/jenkins/workspace/ubuntu24-vg-autotest
00:01:05.677  HELP=0
00:01:05.677  DRY_RUN=0
00:01:05.677  NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,
00:01:05.677  NVME_DISKS_TYPE=nvme,
00:01:05.677  NVME_AUTO_CREATE=0
00:01:05.677  NVME_DISKS_NAMESPACES=,
00:01:05.677  NVME_CMB=,
00:01:05.677  NVME_PMR=,
00:01:05.677  NVME_ZNS=,
00:01:05.677  NVME_MS=,
00:01:05.677  NVME_FDP=,
00:01:05.677  SPDK_VAGRANT_DISTRO=ubuntu2404
00:01:05.677  SPDK_VAGRANT_VMCPU=10
00:01:05.677  SPDK_VAGRANT_VMRAM=12288
00:01:05.677  SPDK_VAGRANT_PROVIDER=libvirt
00:01:05.677  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:05.677  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:05.677  SPDK_OPENSTACK_NETWORK=0
00:01:05.677  VAGRANT_PACKAGE_BOX=0
00:01:05.677  VAGRANTFILE=/var/jenkins/workspace/ubuntu24-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:05.677  FORCE_DISTRO=true
00:01:05.677  VAGRANT_BOX_VERSION=
00:01:05.677  EXTRA_VAGRANTFILES=
00:01:05.677  NIC_MODEL=e1000
00:01:05.677  
00:01:05.677  mkdir: created directory '/var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt'
00:01:05.677  /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt /var/jenkins/workspace/ubuntu24-vg-autotest
00:01:08.968  Bringing machine 'default' up with 'libvirt' provider...
00:01:09.227  ==> default: Creating image (snapshot of base box volume).
00:01:09.487  ==> default: Creating domain with the following settings...
00:01:09.487  ==> default:  -- Name:              ubuntu2404-24.04-1720510786-2314_default_1731908790_d1b60e79eb68ea485ad3
00:01:09.487  ==> default:  -- Domain type:       kvm
00:01:09.487  ==> default:  -- Cpus:              10
00:01:09.487  ==> default:  -- Feature:           acpi
00:01:09.487  ==> default:  -- Feature:           apic
00:01:09.487  ==> default:  -- Feature:           pae
00:01:09.487  ==> default:  -- Memory:            12288M
00:01:09.487  ==> default:  -- Memory Backing:    hugepages: 
00:01:09.487  ==> default:  -- Management MAC:    
00:01:09.487  ==> default:  -- Loader:            
00:01:09.487  ==> default:  -- Nvram:             
00:01:09.487  ==> default:  -- Base box:          spdk/ubuntu2404
00:01:09.487  ==> default:  -- Storage pool:      default
00:01:09.487  ==> default:  -- Image:             /var/lib/libvirt/images/ubuntu2404-24.04-1720510786-2314_default_1731908790_d1b60e79eb68ea485ad3.img (20G)
00:01:09.487  ==> default:  -- Volume Cache:      default
00:01:09.487  ==> default:  -- Kernel:            
00:01:09.487  ==> default:  -- Initrd:            
00:01:09.487  ==> default:  -- Graphics Type:     vnc
00:01:09.487  ==> default:  -- Graphics Port:     -1
00:01:09.487  ==> default:  -- Graphics IP:       127.0.0.1
00:01:09.487  ==> default:  -- Graphics Password: Not defined
00:01:09.487  ==> default:  -- Video Type:        cirrus
00:01:09.487  ==> default:  -- Video VRAM:        9216
00:01:09.487  ==> default:  -- Sound Type:
00:01:09.487  ==> default:  -- Keymap:            en-us
00:01:09.487  ==> default:  -- TPM Path:          
00:01:09.487  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:09.487  ==> default:  -- Command line args: 
00:01:09.487  ==> default:     -> value=-device, 
00:01:09.487  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:01:09.487  ==> default:     -> value=-drive, 
00:01:09.487  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 
00:01:09.487  ==> default:     -> value=-device, 
00:01:09.487  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
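
The "Command line args" above are extra flags vagrant-libvirt hands to QEMU to expose the raw backing file as an NVMe namespace. On a bare QEMU invocation (hypothetical standalone run; here the VM is managed by vagrant-libvirt) the same attachment looks like:

qemu-system-x86_64 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096
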
00:01:09.487  ==> default: Creating shared folders metadata...
00:01:09.487  ==> default: Starting domain.
00:01:10.868  ==> default: Waiting for domain to get an IP address...
00:01:20.871  ==> default: Waiting for SSH to become available...
00:01:21.808  ==> default: Configuring and enabling network interfaces...
00:01:27.082  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:31.267  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:01:36.538  ==> default: Mounting SSHFS shared folder...
00:01:37.915  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output => /home/vagrant/spdk_repo/output
00:01:37.915  ==> default: Checking Mount..
00:01:38.483  ==> default: Folder Successfully Mounted!
00:01:38.483  ==> default: Running provisioner: file...
00:01:38.742      default: ~/.gitconfig => .gitconfig
00:01:39.309  
00:01:39.309    SUCCESS!
00:01:39.309  
00:01:39.309    cd to /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt and type "vagrant ssh" to use.
00:01:39.309    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:39.309    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt" to destroy all trace of vm.
00:01:39.309  
00:01:39.317  [Pipeline] }
00:01:39.331  [Pipeline] // stage
00:01:39.341  [Pipeline] dir
00:01:39.341  Running in /var/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt
00:01:39.343  [Pipeline] {
00:01:39.354  [Pipeline] catchError
00:01:39.356  [Pipeline] {
00:01:39.367  [Pipeline] sh
00:01:39.647  + vagrant ssh-config --host vagrant
00:01:39.647  + sed -ne /^Host/,$p
00:01:39.647  + tee ssh_conf
00:01:43.009  Host vagrant
00:01:43.009    HostName 192.168.121.253
00:01:43.009    User vagrant
00:01:43.009    Port 22
00:01:43.009    UserKnownHostsFile /dev/null
00:01:43.009    StrictHostKeyChecking no
00:01:43.009    PasswordAuthentication no
00:01:43.009    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2404/24.04-1720510786-2314/libvirt/ubuntu2404
00:01:43.009    IdentitiesOnly yes
00:01:43.009    LogLevel FATAL
00:01:43.009    ForwardAgent yes
00:01:43.009    ForwardX11 yes
00:01:43.009  
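
All later steps reach the VM through this generated file via ssh -F / scp -F, so the throwaway VM's host key never touches ~/.ssh/known_hosts (StrictHostKeyChecking no, UserKnownHostsFile /dev/null). For example:

# Run a command in, and copy a file to, the test VM (some-script.sh is
# a placeholder).
ssh -F ssh_conf vagrant 'uname -a'
scp -F ssh_conf some-script.sh vagrant:./
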
00:01:43.023  [Pipeline] withEnv
00:01:43.026  [Pipeline] {
00:01:43.039  [Pipeline] sh
00:01:43.319  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:43.319  		source /etc/os-release
00:01:43.319  		[[ -e /image.version ]] && img=$(< /image.version)
00:01:43.319  		# Minimal, systemd-like check.
00:01:43.319  		if [[ -e /.dockerenv ]]; then
00:01:43.319  			# Clear garbage from the node's name:
00:01:43.319  			#  agt-er_autotest_547-896 -> autotest_547-896
00:01:43.319  			#  $HOSTNAME is the actual container id
00:01:43.319  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:43.319  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:43.319  			# We can assume this is a mount from the host where the container is running,
00:01:43.319  			# so fetch its hostname to easily identify the target swarm worker.
00:01:43.319  				container="$(< /etc/hostname) ($agent)"
00:01:43.319  			else
00:01:43.319  				# Fallback
00:01:43.319  				container=$agent
00:01:43.319  			fi
00:01:43.319  		fi
00:01:43.319  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:43.319  
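
The probe above emits a single pipe-delimited line, NAME VERSION|kernel|image|container, which a consumer can split directly (probe_line is a hypothetical variable holding that output):

# e.g. "Ubuntu 24.04|6.8.0-36-generic|<image.version>|N/A"
IFS='|' read -r os kernel image container <<< "$probe_line"
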
00:01:43.589  [Pipeline] }
00:01:43.605  [Pipeline] // withEnv
00:01:43.613  [Pipeline] setCustomBuildProperty
00:01:43.627  [Pipeline] stage
00:01:43.629  [Pipeline] { (Tests)
00:01:43.646  [Pipeline] sh
00:01:43.927  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:44.199  [Pipeline] sh
00:01:44.479  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:44.753  [Pipeline] timeout
00:01:44.753  Timeout set to expire in 1 hr 30 min
00:01:44.756  [Pipeline] {
00:01:44.771  [Pipeline] sh
00:01:45.051  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:45.618  HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process
00:01:45.629  [Pipeline] sh
00:01:45.907  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:46.181  [Pipeline] sh
00:01:46.461  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu24-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:46.732  [Pipeline] sh
00:01:47.006  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu24-vg-autotest ./autoruner.sh spdk_repo
00:01:47.266  ++ readlink -f spdk_repo
00:01:47.266  + DIR_ROOT=/home/vagrant/spdk_repo
00:01:47.266  + [[ -n /home/vagrant/spdk_repo ]]
00:01:47.266  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:47.266  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:47.266  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:47.266  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:47.266  + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:47.266  + [[ ubuntu24-vg-autotest == pkgdep-* ]]
00:01:47.266  + cd /home/vagrant/spdk_repo
00:01:47.266  + source /etc/os-release
00:01:47.266  ++ PRETTY_NAME='Ubuntu 24.04 LTS'
00:01:47.266  ++ NAME=Ubuntu
00:01:47.266  ++ VERSION_ID=24.04
00:01:47.266  ++ VERSION='24.04 LTS (Noble Numbat)'
00:01:47.266  ++ VERSION_CODENAME=noble
00:01:47.266  ++ ID=ubuntu
00:01:47.266  ++ ID_LIKE=debian
00:01:47.266  ++ HOME_URL=https://www.ubuntu.com/
00:01:47.266  ++ SUPPORT_URL=https://help.ubuntu.com/
00:01:47.266  ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:01:47.266  ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:01:47.266  ++ UBUNTU_CODENAME=noble
00:01:47.266  ++ LOGO=ubuntu-logo
00:01:47.266  + uname -a
00:01:47.266  Linux ubuntu2404-cloud-1720510786-2314 6.8.0-36-generic #36-Ubuntu SMP PREEMPT_DYNAMIC Mon Jun 10 10:49:14 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:01:47.266  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:47.524  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:01:47.524  Hugepages
00:01:47.524  node     hugesize     free /  total
00:01:47.524  node0   1048576kB        0 /      0
00:01:47.524  node0      2048kB        0 /      0
00:01:47.524  
00:01:47.524  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:01:47.524  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:01:47.524  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
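
setup.sh status reports zero hugepages because nothing has been reserved yet, and the NVMe device at 0000:00:10.0 is still on the kernel nvme driver. A typical later step reserves memory and rebinds devices (illustrative invocation, not part of this log; HUGEMEM is in MB):

sudo HUGEMEM=4096 /home/vagrant/spdk_repo/spdk/scripts/setup.sh
sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
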
00:01:47.524  + rm -f /tmp/spdk-ld-path
00:01:47.524  + source autorun-spdk.conf
00:01:47.524  ++ SPDK_TEST_UNITTEST=1
00:01:47.524  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.524  ++ SPDK_TEST_NVME=1
00:01:47.524  ++ SPDK_TEST_BLOCKDEV=1
00:01:47.524  ++ SPDK_RUN_ASAN=1
00:01:47.524  ++ SPDK_RUN_UBSAN=1
00:01:47.524  ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:47.524  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:47.524  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:47.524  ++ RUN_NIGHTLY=1
00:01:47.524  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:01:47.524  + [[ -n '' ]]
00:01:47.524  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:47.524  + for M in /var/spdk/build-*-manifest.txt
00:01:47.524  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:47.524  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:47.525  + for M in /var/spdk/build-*-manifest.txt
00:01:47.525  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:47.525  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:47.525  ++ uname
00:01:47.525  + [[ Linux == \L\i\n\u\x ]]
00:01:47.525  + sudo dmesg -T
00:01:47.783  + sudo dmesg --clear
00:01:47.783  + dmesg_pid=2553
00:01:47.783  + [[ Ubuntu == FreeBSD ]]
00:01:47.783  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.783  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:47.783  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:47.783  + sudo dmesg -Tw
00:01:47.783  + [[ -x /usr/src/fio-static/fio ]]
00:01:47.783  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:47.783  + [[ ! -v VFIO_QEMU_BIN ]]
00:01:47.783  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:47.783  + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:01:47.783  + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:47.784  + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:01:47.784  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:47.784  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:47.784    05:47:07  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:47.784   05:47:07  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_TEST_UNITTEST=1
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME=1
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_BLOCKDEV=1
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:47.784    05:47:07  -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:01:47.784   05:47:07  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:47.784   05:47:07  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:47.784     05:47:07  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:01:47.784    05:47:07  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:47.784     05:47:07  -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:47.784     05:47:07  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:47.784     05:47:07  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:47.784     05:47:07  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:47.784      05:47:07  -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:47.784      05:47:07  -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:47.784      05:47:07  -- paths/export.sh@4 -- $ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:47.784      05:47:07  -- paths/export.sh@5 -- $ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:47.784      05:47:07  -- paths/export.sh@6 -- $ export PATH
00:01:47.784      05:47:07  -- paths/export.sh@7 -- $ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:01:47.784    05:47:07  -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:47.784      05:47:07  -- common/autobuild_common.sh@486 -- $ date +%s
00:01:47.784     05:47:07  -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731908827.XXXXXX
00:01:47.784    05:47:07  -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731908827.39kCYw
00:01:47.784    05:47:07  -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:01:47.784    05:47:07  -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']'
00:01:47.784     05:47:07  -- common/autobuild_common.sh@493 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:47.784    05:47:07  -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:01:47.784    05:47:07  -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:47.784    05:47:07  -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp  --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:47.784     05:47:07  -- common/autobuild_common.sh@502 -- $ get_config_params
00:01:47.784     05:47:07  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:47.784     05:47:07  -- common/autotest_common.sh@10 -- $ set +x
00:01:47.784    05:47:07  -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:01:47.784    05:47:07  -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:01:47.784    05:47:07  -- pm/common@17 -- $ local monitor
00:01:47.784    05:47:07  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.784    05:47:07  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:47.784    05:47:07  -- pm/common@25 -- $ sleep 1
00:01:47.784     05:47:07  -- pm/common@21 -- $ date +%s
00:01:47.784     05:47:07  -- pm/common@21 -- $ date +%s
00:01:47.784    05:47:07  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731908827
00:01:47.784    05:47:07  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731908827
00:01:47.784  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731908827_collect-cpu-load.pm.log
00:01:47.784  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731908827_collect-vmstat.pm.log
00:01:49.160    05:47:08  -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:49.160   05:47:08  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:49.160   05:47:08  -- spdk/autobuild.sh@12 -- $ umask 022
00:01:49.160   05:47:08  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:49.160   05:47:08  -- spdk/autobuild.sh@16 -- $ date -u
00:01:49.160  Mon Nov 18 05:47:08 UTC 2024
00:01:49.160   05:47:08  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:49.160  v25.01-pre-189-g83e8405e4
00:01:49.160   05:47:08  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:49.160   05:47:08  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:49.160   05:47:08  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.160   05:47:08  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.160   05:47:08  -- common/autotest_common.sh@10 -- $ set +x
00:01:49.160  ************************************
00:01:49.160  START TEST asan
00:01:49.160  ************************************
00:01:49.160  using asan
00:01:49.160   05:47:08 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:49.160  
00:01:49.160  real	0m0.000s
00:01:49.160  user	0m0.000s
00:01:49.160  sys	0m0.000s
00:01:49.160   05:47:08 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:49.160   05:47:08 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:49.160  ************************************
00:01:49.160  END TEST asan
00:01:49.160  ************************************
00:01:49.160   05:47:08  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:49.160   05:47:08  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:49.160   05:47:08  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:49.160   05:47:08  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.160   05:47:08  -- common/autotest_common.sh@10 -- $ set +x
00:01:49.160  ************************************
00:01:49.160  START TEST ubsan
00:01:49.160  ************************************
00:01:49.160  using ubsan
00:01:49.160   05:47:08 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:49.160  
00:01:49.160  real	0m0.000s
00:01:49.160  user	0m0.000s
00:01:49.160  sys	0m0.000s
00:01:49.160   05:47:08 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:49.160   05:47:08 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:49.160  ************************************
00:01:49.160  END TEST ubsan
00:01:49.160  ************************************
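
run_test (from common/autotest_common.sh) brackets a command with the START/END banners and the real/user/sys timing seen above. A simplified sketch of the wrapper's shape (not the actual implementation):

run_test_sketch() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"
    echo "END TEST $name"
}
run_test_sketch ubsan echo 'using ubsan'
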
00:01:49.160   05:47:08  -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:49.160   05:47:08  -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:49.160   05:47:08  -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:49.160   05:47:08  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:01:49.160   05:47:08  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:49.160   05:47:08  -- common/autotest_common.sh@10 -- $ set +x
00:01:49.160  ************************************
00:01:49.160  START TEST build_native_dpdk
00:01:49.160  ************************************
00:01:49.160   05:47:08 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:01:49.160    05:47:08 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:01:49.160    05:47:08 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:01:49.160  caf0f5d395 version: 22.11.4
00:01:49.160  7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:49.160  dc9c799c7d vhost: fix missing spinlock unlock
00:01:49.160  4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:49.160  6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
00:01:49.160    05:47:08 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:01:49.160    05:47:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:49.160  patching file config/rte_config.h
00:01:49.160  Hunk #1 succeeded at 60 (offset 1 line).
00:01:49.160   05:47:08 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0
00:01:49.160   05:47:08 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@368 -- $ return 0
00:01:49.161   05:47:08 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1
00:01:49.161  patching file lib/pcapng/rte_pcapng.c
00:01:49.161  Hunk #1 succeeded at 110 (offset -18 lines).
00:01:49.161   05:47:08 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]]
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:01:49.161    05:47:08 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:01:49.161   05:47:08 build_native_dpdk -- scripts/common.sh@368 -- $ return 1
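
cmp_versions splits both version strings on IFS=.-: and compares the numeric fields left to right; the three xtrace walks above are that loop deciding which DPDK compatibility patches apply. A compact sketch of the same logic (numeric fields only; version_lt is a hypothetical name):

version_lt() {    # returns 0 if $1 < $2
    local IFS=.-: i v1 v2
    read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    done
    return 1    # equal is not less-than
}
version_lt 22.11.4 24.07.0 && echo "apply the rte_pcapng patch"
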
00:01:49.161   05:47:08 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false
00:01:49.161    05:47:08 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s
00:01:49.161   05:47:08 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']'
00:01:49.161    05:47:08 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
00:01:49.161   05:47:08 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
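
The composed configure line stages DPDK under build-tmp with docs, kmods, and tests disabled, the sanitizer-friendly cflags assembled above, and only the driver set SPDK needs. The compile/install pair that would follow this configure (standard meson usage; the continuation is not shown in this excerpt) is:

meson compile -C build-tmp
meson install -C build-tmp
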
00:01:54.426  The Meson build system
00:01:54.426  Version: 1.4.1
00:01:54.426  Source dir: /home/vagrant/spdk_repo/dpdk
00:01:54.426  Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:01:54.426  Build type: native build
00:01:54.426  Program cat found: YES (/usr/bin/cat)
00:01:54.426  Project name: DPDK
00:01:54.426  Project version: 22.11.4
00:01:54.426  C compiler for the host machine: gcc (gcc 13.2.0 "gcc (Ubuntu 13.2.0-23ubuntu4) 13.2.0")
00:01:54.426  C linker for the host machine: gcc ld.bfd 2.42
00:01:54.426  Host machine cpu family: x86_64
00:01:54.426  Host machine cpu: x86_64
00:01:54.426  Message: ## Building in Developer Mode ##
00:01:54.426  Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:54.426  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:01:54.426  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:01:54.426  Program objdump found: YES (/usr/bin/objdump)
00:01:54.426  Program python3 found: YES (/var/spdk/dependencies/pip/bin/python3)
00:01:54.426  Program cat found: YES (/usr/bin/cat)
00:01:54.426  config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:54.426  Checking for size of "void *" : 8 
00:01:54.426  Checking for size of "void *" : 8 (cached)
00:01:54.426  Library m found: YES
00:01:54.426  Library numa found: YES
00:01:54.426  Has header "numaif.h" : YES 
00:01:54.426  Library fdt found: NO
00:01:54.426  Library execinfo found: NO
00:01:54.426  Has header "execinfo.h" : YES 
00:01:54.426  Found pkg-config: YES (/usr/bin/pkg-config) 1.8.1
00:01:54.426  Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:54.426  Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:54.426  Run-time dependency jansson found: NO (tried pkgconfig)
00:01:54.426  Run-time dependency openssl found: YES 3.0.13
00:01:54.426  Run-time dependency libpcap found: NO (tried pkgconfig)
00:01:54.426  Library pcap found: NO
00:01:54.426  Compiler for C supports arguments -Wcast-qual: YES 
00:01:54.426  Compiler for C supports arguments -Wdeprecated: YES 
00:01:54.426  Compiler for C supports arguments -Wformat: YES 
00:01:54.426  Compiler for C supports arguments -Wformat-nonliteral: YES 
00:01:54.426  Compiler for C supports arguments -Wformat-security: YES 
00:01:54.426  Compiler for C supports arguments -Wmissing-declarations: YES 
00:01:54.426  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:01:54.426  Compiler for C supports arguments -Wnested-externs: YES 
00:01:54.426  Compiler for C supports arguments -Wold-style-definition: YES 
00:01:54.426  Compiler for C supports arguments -Wpointer-arith: YES 
00:01:54.426  Compiler for C supports arguments -Wsign-compare: YES 
00:01:54.426  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:01:54.426  Compiler for C supports arguments -Wundef: YES 
00:01:54.426  Compiler for C supports arguments -Wwrite-strings: YES 
00:01:54.426  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:01:54.426  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:01:54.426  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:01:54.426  Compiler for C supports arguments -Wno-zero-length-bounds: YES 
00:01:54.426  Compiler for C supports arguments -mavx512f: YES 
00:01:54.426  Checking if "AVX512 checking" compiles: YES 
00:01:54.426  Fetching value of define "__SSE4_2__" : 1 
00:01:54.426  Fetching value of define "__AES__" : 1 
00:01:54.426  Fetching value of define "__AVX__" : 1 
00:01:54.426  Fetching value of define "__AVX2__" : 1 
00:01:54.426  Fetching value of define "__AVX512BW__" : (undefined) 
00:01:54.426  Fetching value of define "__AVX512CD__" : (undefined) 
00:01:54.426  Fetching value of define "__AVX512DQ__" : (undefined) 
00:01:54.426  Fetching value of define "__AVX512F__" : (undefined) 
00:01:54.426  Fetching value of define "__AVX512VL__" : (undefined) 
00:01:54.426  Fetching value of define "__PCLMUL__" : 1 
00:01:54.426  Fetching value of define "__RDRND__" : 1 
00:01:54.426  Fetching value of define "__RDSEED__" : 1 
00:01:54.426  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:01:54.426  Compiler for C supports arguments -Wno-format-truncation: YES 
00:01:54.426  Message: lib/kvargs: Defining dependency "kvargs"
00:01:54.426  Message: lib/telemetry: Defining dependency "telemetry"
00:01:54.426  Checking for function "getentropy" : YES 
00:01:54.426  Message: lib/eal: Defining dependency "eal"
00:01:54.426  Message: lib/ring: Defining dependency "ring"
00:01:54.426  Message: lib/rcu: Defining dependency "rcu"
00:01:54.426  Message: lib/mempool: Defining dependency "mempool"
00:01:54.426  Message: lib/mbuf: Defining dependency "mbuf"
00:01:54.426  Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:54.426  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:54.426  Compiler for C supports arguments -mpclmul: YES 
00:01:54.426  Compiler for C supports arguments -maes: YES 
00:01:54.426  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:54.426  Compiler for C supports arguments -mavx512bw: YES 
00:01:54.426  Compiler for C supports arguments -mavx512dq: YES 
00:01:54.426  Compiler for C supports arguments -mavx512vl: YES 
00:01:54.426  Compiler for C supports arguments -mvpclmulqdq: YES 
00:01:54.426  Compiler for C supports arguments -mavx2: YES 
00:01:54.426  Compiler for C supports arguments -mavx: YES 
00:01:54.426  Message: lib/net: Defining dependency "net"
00:01:54.426  Message: lib/meter: Defining dependency "meter"
00:01:54.426  Message: lib/ethdev: Defining dependency "ethdev"
00:01:54.426  Message: lib/pci: Defining dependency "pci"
00:01:54.426  Message: lib/cmdline: Defining dependency "cmdline"
00:01:54.426  Message: lib/metrics: Defining dependency "metrics"
00:01:54.426  Message: lib/hash: Defining dependency "hash"
00:01:54.426  Message: lib/timer: Defining dependency "timer"
00:01:54.426  Fetching value of define "__AVX2__" : 1 (cached)
00:01:54.426  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:54.426  Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:01:54.426  Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:01:54.426  Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:01:54.426  Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 
00:01:54.426  Message: lib/acl: Defining dependency "acl"
00:01:54.426  Message: lib/bbdev: Defining dependency "bbdev"
00:01:54.426  Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:54.426  Run-time dependency libelf found: YES 0.190
00:01:54.426  lib/bpf/meson.build:43: WARNING: libpcap is missing, rte_bpf_convert API will be disabled
00:01:54.426  Message: lib/bpf: Defining dependency "bpf"
00:01:54.426  Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:54.426  Message: lib/compressdev: Defining dependency "compressdev"
00:01:54.426  Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:54.426  Message: lib/distributor: Defining dependency "distributor"
00:01:54.426  Message: lib/efd: Defining dependency "efd"
00:01:54.426  Message: lib/eventdev: Defining dependency "eventdev"
00:01:54.426  Message: lib/gpudev: Defining dependency "gpudev"
00:01:54.426  Message: lib/gro: Defining dependency "gro"
00:01:54.426  Message: lib/gso: Defining dependency "gso"
00:01:54.426  Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:54.426  Message: lib/jobstats: Defining dependency "jobstats"
00:01:54.426  Message: lib/latencystats: Defining dependency "latencystats"
00:01:54.426  Message: lib/lpm: Defining dependency "lpm"
00:01:54.426  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:54.426  Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:54.426  Fetching value of define "__AVX512IFMA__" : (undefined) 
00:01:54.426  Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:01:54.426  Message: lib/member: Defining dependency "member"
00:01:54.426  Message: lib/pcapng: Defining dependency "pcapng"
00:01:54.426  Compiler for C supports arguments -Wno-cast-qual: YES 
00:01:54.426  Message: lib/power: Defining dependency "power"
00:01:54.426  Message: lib/rawdev: Defining dependency "rawdev"
00:01:54.426  Message: lib/regexdev: Defining dependency "regexdev"
00:01:54.426  Message: lib/dmadev: Defining dependency "dmadev"
00:01:54.426  Message: lib/rib: Defining dependency "rib"
00:01:54.426  Message: lib/reorder: Defining dependency "reorder"
00:01:54.426  Message: lib/sched: Defining dependency "sched"
00:01:54.426  Message: lib/security: Defining dependency "security"
00:01:54.426  Message: lib/stack: Defining dependency "stack"
00:01:54.426  Has header "linux/userfaultfd.h" : YES 
00:01:54.426  Message: lib/vhost: Defining dependency "vhost"
00:01:54.426  Message: lib/ipsec: Defining dependency "ipsec"
00:01:54.426  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:54.426  Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:54.426  Compiler for C supports arguments -mavx512f -mavx512dq: YES 
00:01:54.426  Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:54.426  Message: lib/fib: Defining dependency "fib"
00:01:54.426  Message: lib/port: Defining dependency "port"
00:01:54.426  Message: lib/pdump: Defining dependency "pdump"
00:01:54.426  Message: lib/table: Defining dependency "table"
00:01:54.426  Message: lib/pipeline: Defining dependency "pipeline"
00:01:54.426  Message: lib/graph: Defining dependency "graph"
00:01:54.426  Message: lib/node: Defining dependency "node"
00:01:54.427  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:54.427  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:54.427  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:54.427  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:54.427  Compiler for C supports arguments -Wno-sign-compare: YES 
00:01:54.427  Compiler for C supports arguments -Wno-unused-value: YES 
00:01:54.427  Compiler for C supports arguments -Wno-format: YES 
00:01:54.427  Compiler for C supports arguments -Wno-format-security: YES 
00:01:55.365  Compiler for C supports arguments -Wno-format-nonliteral: YES 
00:01:55.365  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:01:55.365  Compiler for C supports arguments -Wno-unused-but-set-variable: YES 
00:01:55.365  Compiler for C supports arguments -Wno-unused-parameter: YES 
00:01:55.365  Fetching value of define "__AVX2__" : 1 (cached)
00:01:55.365  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:55.365  Compiler for C supports arguments -mavx512f: YES (cached)
00:01:55.365  Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:55.365  Compiler for C supports arguments -march=skylake-avx512: YES 
00:01:55.365  Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:55.365  Program doxygen found: YES (/usr/bin/doxygen)
00:01:55.365  Configuring doxy-api.conf using configuration
00:01:55.365  Program sphinx-build found: NO
00:01:55.365  Configuring rte_build_config.h using configuration
00:01:55.365  Message: 
00:01:55.365  =================
00:01:55.365  Applications Enabled
00:01:55.365  =================
00:01:55.365  
00:01:55.365  apps:
00:01:55.365  	pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, test-eventdev, 
00:01:55.365  	test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, test-security-perf, 
00:01:55.365  	
00:01:55.365  
00:01:55.365  Message: 
00:01:55.365  =================
00:01:55.365  Libraries Enabled
00:01:55.365  =================
00:01:55.365  
00:01:55.365  libs:
00:01:55.365  	kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 
00:01:55.365  	meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 
00:01:55.365  	bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 
00:01:55.365  	eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 
00:01:55.365  	member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 
00:01:55.365  	sched, security, stack, vhost, ipsec, fib, port, pdump, 
00:01:55.365  	table, pipeline, graph, node, 
00:01:55.365  
00:01:55.365  Message: 
00:01:55.365  ===============
00:01:55.365  Drivers Enabled
00:01:55.365  ===============
00:01:55.365  
00:01:55.365  common:
00:01:55.365  	
00:01:55.365  bus:
00:01:55.365  	pci, vdev, 
00:01:55.365  mempool:
00:01:55.365  	ring, 
00:01:55.365  dma:
00:01:55.365  	
00:01:55.365  net:
00:01:55.365  	i40e, 
00:01:55.365  raw:
00:01:55.365  	
00:01:55.365  crypto:
00:01:55.365  	
00:01:55.365  compress:
00:01:55.365  	
00:01:55.365  regex:
00:01:55.365  	
00:01:55.365  vdpa:
00:01:55.365  	
00:01:55.365  event:
00:01:55.365  	
00:01:55.365  baseband:
00:01:55.365  	
00:01:55.365  gpu:
00:01:55.365  	
00:01:55.365  
00:01:55.365  Message: 
00:01:55.365  =================
00:01:55.365  Content Skipped
00:01:55.365  =================
00:01:55.365  
00:01:55.365  apps:
00:01:55.365  	dumpcap:	missing dependency, "libpcap"
00:01:55.365  	
00:01:55.365  libs:
00:01:55.365  	kni:	explicitly disabled via build config (deprecated lib)
00:01:55.365  	flow_classify:	explicitly disabled via build config (deprecated lib)
00:01:55.365  	
00:01:55.365  drivers:
00:01:55.365  	common/cpt:	not in enabled drivers build config
00:01:55.365  	common/dpaax:	not in enabled drivers build config
00:01:55.365  	common/iavf:	not in enabled drivers build config
00:01:55.365  	common/idpf:	not in enabled drivers build config
00:01:55.365  	common/mvep:	not in enabled drivers build config
00:01:55.365  	common/octeontx:	not in enabled drivers build config
00:01:55.365  	bus/auxiliary:	not in enabled drivers build config
00:01:55.365  	bus/dpaa:	not in enabled drivers build config
00:01:55.365  	bus/fslmc:	not in enabled drivers build config
00:01:55.365  	bus/ifpga:	not in enabled drivers build config
00:01:55.365  	bus/vmbus:	not in enabled drivers build config
00:01:55.365  	common/cnxk:	not in enabled drivers build config
00:01:55.365  	common/mlx5:	not in enabled drivers build config
00:01:55.365  	common/qat:	not in enabled drivers build config
00:01:55.365  	common/sfc_efx:	not in enabled drivers build config
00:01:55.365  	mempool/bucket:	not in enabled drivers build config
00:01:55.365  	mempool/cnxk:	not in enabled drivers build config
00:01:55.365  	mempool/dpaa:	not in enabled drivers build config
00:01:55.365  	mempool/dpaa2:	not in enabled drivers build config
00:01:55.365  	mempool/octeontx:	not in enabled drivers build config
00:01:55.365  	mempool/stack:	not in enabled drivers build config
00:01:55.365  	dma/cnxk:	not in enabled drivers build config
00:01:55.365  	dma/dpaa:	not in enabled drivers build config
00:01:55.365  	dma/dpaa2:	not in enabled drivers build config
00:01:55.365  	dma/hisilicon:	not in enabled drivers build config
00:01:55.365  	dma/idxd:	not in enabled drivers build config
00:01:55.365  	dma/ioat:	not in enabled drivers build config
00:01:55.365  	dma/skeleton:	not in enabled drivers build config
00:01:55.365  	net/af_packet:	not in enabled drivers build config
00:01:55.365  	net/af_xdp:	not in enabled drivers build config
00:01:55.365  	net/ark:	not in enabled drivers build config
00:01:55.365  	net/atlantic:	not in enabled drivers build config
00:01:55.365  	net/avp:	not in enabled drivers build config
00:01:55.365  	net/axgbe:	not in enabled drivers build config
00:01:55.365  	net/bnx2x:	not in enabled drivers build config
00:01:55.365  	net/bnxt:	not in enabled drivers build config
00:01:55.365  	net/bonding:	not in enabled drivers build config
00:01:55.365  	net/cnxk:	not in enabled drivers build config
00:01:55.365  	net/cxgbe:	not in enabled drivers build config
00:01:55.365  	net/dpaa:	not in enabled drivers build config
00:01:55.365  	net/dpaa2:	not in enabled drivers build config
00:01:55.365  	net/e1000:	not in enabled drivers build config
00:01:55.365  	net/ena:	not in enabled drivers build config
00:01:55.365  	net/enetc:	not in enabled drivers build config
00:01:55.365  	net/enetfec:	not in enabled drivers build config
00:01:55.365  	net/enic:	not in enabled drivers build config
00:01:55.365  	net/failsafe:	not in enabled drivers build config
00:01:55.365  	net/fm10k:	not in enabled drivers build config
00:01:55.365  	net/gve:	not in enabled drivers build config
00:01:55.365  	net/hinic:	not in enabled drivers build config
00:01:55.366  	net/hns3:	not in enabled drivers build config
00:01:55.366  	net/iavf:	not in enabled drivers build config
00:01:55.366  	net/ice:	not in enabled drivers build config
00:01:55.366  	net/idpf:	not in enabled drivers build config
00:01:55.366  	net/igc:	not in enabled drivers build config
00:01:55.366  	net/ionic:	not in enabled drivers build config
00:01:55.366  	net/ipn3ke:	not in enabled drivers build config
00:01:55.366  	net/ixgbe:	not in enabled drivers build config
00:01:55.366  	net/kni:	not in enabled drivers build config
00:01:55.366  	net/liquidio:	not in enabled drivers build config
00:01:55.366  	net/mana:	not in enabled drivers build config
00:01:55.366  	net/memif:	not in enabled drivers build config
00:01:55.366  	net/mlx4:	not in enabled drivers build config
00:01:55.366  	net/mlx5:	not in enabled drivers build config
00:01:55.366  	net/mvneta:	not in enabled drivers build config
00:01:55.366  	net/mvpp2:	not in enabled drivers build config
00:01:55.366  	net/netvsc:	not in enabled drivers build config
00:01:55.366  	net/nfb:	not in enabled drivers build config
00:01:55.366  	net/nfp:	not in enabled drivers build config
00:01:55.366  	net/ngbe:	not in enabled drivers build config
00:01:55.366  	net/null:	not in enabled drivers build config
00:01:55.366  	net/octeontx:	not in enabled drivers build config
00:01:55.366  	net/octeon_ep:	not in enabled drivers build config
00:01:55.366  	net/pcap:	not in enabled drivers build config
00:01:55.366  	net/pfe:	not in enabled drivers build config
00:01:55.366  	net/qede:	not in enabled drivers build config
00:01:55.366  	net/ring:	not in enabled drivers build config
00:01:55.366  	net/sfc:	not in enabled drivers build config
00:01:55.366  	net/softnic:	not in enabled drivers build config
00:01:55.366  	net/tap:	not in enabled drivers build config
00:01:55.366  	net/thunderx:	not in enabled drivers build config
00:01:55.366  	net/txgbe:	not in enabled drivers build config
00:01:55.366  	net/vdev_netvsc:	not in enabled drivers build config
00:01:55.366  	net/vhost:	not in enabled drivers build config
00:01:55.366  	net/virtio:	not in enabled drivers build config
00:01:55.366  	net/vmxnet3:	not in enabled drivers build config
00:01:55.366  	raw/cnxk_bphy:	not in enabled drivers build config
00:01:55.366  	raw/cnxk_gpio:	not in enabled drivers build config
00:01:55.366  	raw/dpaa2_cmdif:	not in enabled drivers build config
00:01:55.366  	raw/ifpga:	not in enabled drivers build config
00:01:55.366  	raw/ntb:	not in enabled drivers build config
00:01:55.366  	raw/skeleton:	not in enabled drivers build config
00:01:55.366  	crypto/armv8:	not in enabled drivers build config
00:01:55.366  	crypto/bcmfs:	not in enabled drivers build config
00:01:55.366  	crypto/caam_jr:	not in enabled drivers build config
00:01:55.366  	crypto/ccp:	not in enabled drivers build config
00:01:55.366  	crypto/cnxk:	not in enabled drivers build config
00:01:55.366  	crypto/dpaa_sec:	not in enabled drivers build config
00:01:55.366  	crypto/dpaa2_sec:	not in enabled drivers build config
00:01:55.366  	crypto/ipsec_mb:	not in enabled drivers build config
00:01:55.366  	crypto/mlx5:	not in enabled drivers build config
00:01:55.366  	crypto/mvsam:	not in enabled drivers build config
00:01:55.366  	crypto/nitrox:	not in enabled drivers build config
00:01:55.366  	crypto/null:	not in enabled drivers build config
00:01:55.366  	crypto/octeontx:	not in enabled drivers build config
00:01:55.366  	crypto/openssl:	not in enabled drivers build config
00:01:55.366  	crypto/scheduler:	not in enabled drivers build config
00:01:55.366  	crypto/uadk:	not in enabled drivers build config
00:01:55.366  	crypto/virtio:	not in enabled drivers build config
00:01:55.366  	compress/isal:	not in enabled drivers build config
00:01:55.366  	compress/mlx5:	not in enabled drivers build config
00:01:55.366  	compress/octeontx:	not in enabled drivers build config
00:01:55.366  	compress/zlib:	not in enabled drivers build config
00:01:55.366  	regex/mlx5:	not in enabled drivers build config
00:01:55.366  	regex/cn9k:	not in enabled drivers build config
00:01:55.366  	vdpa/ifc:	not in enabled drivers build config
00:01:55.366  	vdpa/mlx5:	not in enabled drivers build config
00:01:55.366  	vdpa/sfc:	not in enabled drivers build config
00:01:55.366  	event/cnxk:	not in enabled drivers build config
00:01:55.366  	event/dlb2:	not in enabled drivers build config
00:01:55.366  	event/dpaa:	not in enabled drivers build config
00:01:55.366  	event/dpaa2:	not in enabled drivers build config
00:01:55.366  	event/dsw:	not in enabled drivers build config
00:01:55.366  	event/opdl:	not in enabled drivers build config
00:01:55.366  	event/skeleton:	not in enabled drivers build config
00:01:55.366  	event/sw:	not in enabled drivers build config
00:01:55.366  	event/octeontx:	not in enabled drivers build config
00:01:55.366  	baseband/acc:	not in enabled drivers build config
00:01:55.366  	baseband/fpga_5gnr_fec:	not in enabled drivers build config
00:01:55.366  	baseband/fpga_lte_fec:	not in enabled drivers build config
00:01:55.366  	baseband/la12xx:	not in enabled drivers build config
00:01:55.366  	baseband/null:	not in enabled drivers build config
00:01:55.366  	baseband/turbo_sw:	not in enabled drivers build config
00:01:55.366  	gpu/cuda:	not in enabled drivers build config
00:01:55.366  	
00:01:55.366  
00:01:55.366  Build targets in project: 313
00:01:55.366  
00:01:55.366  DPDK 22.11.4
00:01:55.366  
00:01:55.366    User defined options
00:01:55.366      libdir        : lib
00:01:55.366      prefix        : /home/vagrant/spdk_repo/dpdk/build
00:01:55.366      c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:55.366      c_link_args   : 
00:01:55.366      enable_docs   : false
00:01:55.366      enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:55.366      enable_kmods  : false
00:01:55.366      machine       : native
00:01:55.366      tests         : false
00:01:55.366  
00:01:55.366  Found ninja-1.11.1.git.kitware.jobserver-1 at /var/spdk/dependencies/pip/bin/ninja
00:01:55.366  WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
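The warning above refers to the deprecated `meson [options]` invocation form; a minimal sketch of the equivalent non-deprecated `meson setup` invocation, with every value copied from the "User defined options" block logged above (the build directory name build-tmp and the assumption that this runs from the DPDK source root /home/vagrant/spdk_repo/dpdk are taken from the ninja step that follows):

    meson setup \
        --libdir lib \
        --prefix /home/vagrant/spdk_repo/dpdk/build \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        '-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,' \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false \
        build-tmp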
00:01:55.366   05:47:16 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:01:55.366  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:01:55.366  [1/740] Generating lib/rte_kvargs_mingw with a custom command
00:01:55.366  [2/740] Generating lib/rte_kvargs_def with a custom command
00:01:55.625  [3/740] Generating lib/rte_telemetry_mingw with a custom command
00:01:55.625  [4/740] Generating lib/rte_telemetry_def with a custom command
00:01:55.625  [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:55.625  [6/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:55.625  [7/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:55.625  [8/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:55.625  [9/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:55.625  [10/740] Linking static target lib/librte_kvargs.a
00:01:55.625  [11/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:55.625  [12/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:55.625  [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:55.625  [14/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:55.884  [15/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:55.884  [16/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:55.884  [17/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:55.884  [18/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:55.884  [19/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:55.884  [20/740] Linking target lib/librte_kvargs.so.23.0
00:01:55.884  [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:55.884  [22/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:01:55.884  [23/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:55.884  [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:56.142  [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:56.142  [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:56.142  [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:56.142  [28/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:56.142  [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:56.142  [30/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:56.142  [31/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:56.142  [32/740] Linking static target lib/librte_telemetry.a
00:01:56.142  [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:56.401  [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:56.401  [35/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:56.401  [36/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:01:56.401  [37/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:56.401  [38/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:56.401  [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:56.401  [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:56.401  [41/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:56.677  [42/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:56.677  [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:56.677  [44/740] Linking target lib/librte_telemetry.so.23.0
00:01:56.677  [45/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:56.677  [46/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:56.677  [47/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:01:56.677  [48/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:56.677  [49/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:56.938  [50/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:56.938  [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:56.938  [52/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:56.938  [53/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:56.938  [54/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:56.938  [55/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:56.938  [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:56.938  [57/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:56.938  [58/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:56.938  [59/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:56.938  [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:56.938  [61/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:56.938  [62/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:56.938  [63/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:57.197  [64/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:57.197  [65/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:01:57.197  [66/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:57.197  [67/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:57.197  [68/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:57.197  [69/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:57.197  [70/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:57.197  [71/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:57.197  [72/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:57.197  [73/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:57.197  [74/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:57.456  [75/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:57.456  [76/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:57.456  [77/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:57.456  [78/740] Generating lib/rte_eal_def with a custom command
00:01:57.456  [79/740] Generating lib/rte_eal_mingw with a custom command
00:01:57.456  [80/740] Generating lib/rte_ring_def with a custom command
00:01:57.456  [81/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:57.456  [82/740] Generating lib/rte_ring_mingw with a custom command
00:01:57.456  [83/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:57.456  [84/740] Generating lib/rte_rcu_def with a custom command
00:01:57.456  [85/740] Generating lib/rte_rcu_mingw with a custom command
00:01:57.456  [86/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:57.456  [87/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:57.714  [88/740] Linking static target lib/librte_ring.a
00:01:57.714  [89/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:57.714  [90/740] Generating lib/rte_mempool_def with a custom command
00:01:57.714  [91/740] Generating lib/rte_mempool_mingw with a custom command
00:01:57.714  [92/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:57.714  [93/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:57.714  [94/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.973  [95/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:57.973  [96/740] Linking static target lib/librte_eal.a
00:01:58.232  [97/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:58.232  [98/740] Generating lib/rte_mbuf_def with a custom command
00:01:58.232  [99/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:58.232  [100/740] Generating lib/rte_mbuf_mingw with a custom command
00:01:58.232  [101/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:58.232  [102/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:58.488  [103/740] Linking static target lib/librte_rcu.a
00:01:58.488  [104/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:58.488  [105/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:58.746  [106/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.746  [107/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:58.746  [108/740] Linking static target lib/librte_mempool.a
00:01:58.746  [109/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:58.746  [110/740] Generating lib/rte_net_def with a custom command
00:01:58.746  [111/740] Generating lib/rte_net_mingw with a custom command
00:01:58.746  [112/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:59.004  [113/740] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:59.004  [114/740] Generating lib/rte_meter_def with a custom command
00:01:59.004  [115/740] Generating lib/rte_meter_mingw with a custom command
00:01:59.004  [116/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:59.004  [117/740] Linking static target lib/librte_meter.a
00:01:59.004  [118/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:59.004  [119/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:59.262  [120/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.262  [121/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:59.262  [122/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:59.262  [123/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:59.262  [124/740] Linking static target lib/librte_net.a
00:01:59.520  [125/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:59.521  [126/740] Linking static target lib/librte_mbuf.a
00:01:59.521  [127/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.521  [128/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:59.778  [129/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:59.778  [130/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:00.035  [131/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:00.035  [132/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:00.035  [133/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:00.035  [134/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:00.292  [135/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:00.857  [136/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:00.857  [137/740] Generating lib/rte_ethdev_def with a custom command
00:02:00.857  [138/740] Generating lib/rte_ethdev_mingw with a custom command
00:02:00.857  [139/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:00.857  [140/740] Generating lib/rte_pci_def with a custom command
00:02:00.857  [141/740] Generating lib/rte_pci_mingw with a custom command
00:02:00.857  [142/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:00.857  [143/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:00.857  [144/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:00.857  [145/740] Linking static target lib/librte_pci.a
00:02:00.857  [146/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:00.857  [147/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:01.114  [148/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:01.114  [149/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:01.114  [150/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:01.114  [151/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:01.114  [152/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:01.114  [153/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:01.114  [154/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:01.372  [155/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:01.372  [156/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:01.372  [157/740] Generating lib/rte_cmdline_def with a custom command
00:02:01.372  [158/740] Generating lib/rte_cmdline_mingw with a custom command
00:02:01.372  [159/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:01.372  [160/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:01.372  [161/740] Generating lib/rte_metrics_def with a custom command
00:02:01.372  [162/740] Generating lib/rte_metrics_mingw with a custom command
00:02:01.372  [163/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:01.372  [164/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:02:01.630  [165/740] Generating lib/rte_hash_def with a custom command
00:02:01.630  [166/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:01.630  [167/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:01.630  [168/740] Generating lib/rte_hash_mingw with a custom command
00:02:01.630  [169/740] Generating lib/rte_timer_def with a custom command
00:02:01.630  [170/740] Generating lib/rte_timer_mingw with a custom command
00:02:01.630  [171/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:01.630  [172/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:01.630  [173/740] Linking static target lib/librte_cmdline.a
00:02:02.196  [174/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:02:02.196  [175/740] Linking static target lib/librte_metrics.a
00:02:02.196  [176/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:02.196  [177/740] Linking static target lib/librte_timer.a
00:02:02.454  [178/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.454  [179/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.712  [180/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:02.712  [181/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:02:02.712  [182/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:02.712  [183/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:02.712  [184/740] Linking static target lib/librte_ethdev.a
00:02:03.278  [185/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:02:03.278  [186/740] Generating lib/rte_acl_def with a custom command
00:02:03.278  [187/740] Generating lib/rte_acl_mingw with a custom command
00:02:03.278  [188/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:02:03.278  [189/740] Generating lib/rte_bbdev_def with a custom command
00:02:03.278  [190/740] Generating lib/rte_bbdev_mingw with a custom command
00:02:03.536  [191/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:02:03.536  [192/740] Generating lib/rte_bitratestats_def with a custom command
00:02:03.536  [193/740] Generating lib/rte_bitratestats_mingw with a custom command
00:02:03.793  [194/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:04.050  [195/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:02:04.050  [196/740] Linking static target lib/librte_bitratestats.a
00:02:04.308  [197/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:02:04.308  [198/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:04.308  [199/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:02:04.308  [200/740] Linking static target lib/librte_bbdev.a
00:02:04.565  [201/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:02:04.565  [202/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:04.565  [203/740] Linking static target lib/librte_hash.a
00:02:04.823  [204/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:04.823  [205/740] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:02:04.823  [206/740] Linking static target lib/acl/libavx512_tmp.a
00:02:05.081  [207/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.081  [208/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:02:05.081  [209/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:02:05.338  [210/740] Generating lib/rte_bpf_def with a custom command
00:02:05.338  [211/740] Generating lib/rte_bpf_mingw with a custom command
00:02:05.338  [212/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.338  [213/740] Linking target lib/librte_eal.so.23.0
00:02:05.338  [214/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:05.338  [215/740] Generating lib/rte_cfgfile_def with a custom command
00:02:05.338  [216/740] Generating lib/rte_cfgfile_mingw with a custom command
00:02:05.338  [217/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:05.596  [218/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:02:05.596  [219/740] Linking target lib/librte_ring.so.23.0
00:02:05.596  [220/740] Linking target lib/librte_meter.so.23.0
00:02:05.596  [221/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:05.596  [222/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:02:05.596  [223/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:05.596  [224/740] Linking target lib/librte_mempool.so.23.0
00:02:05.596  [225/740] Linking target lib/librte_rcu.so.23.0
00:02:05.596  [226/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:05.596  [227/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:02:05.596  [228/740] Linking target lib/librte_pci.so.23.0
00:02:05.596  [229/740] Linking static target lib/librte_acl.a
00:02:05.596  [230/740] Linking target lib/librte_timer.so.23.0
00:02:05.854  [231/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:05.854  [232/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:05.854  [233/740] Linking static target lib/librte_cfgfile.a
00:02:05.854  [234/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:05.854  [235/740] Linking target lib/librte_mbuf.so.23.0
00:02:05.854  [236/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:05.854  [237/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:02:05.854  [238/740] Generating lib/rte_compressdev_def with a custom command
00:02:05.854  [239/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:05.854  [240/740] Generating lib/rte_compressdev_mingw with a custom command
00:02:05.854  [241/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.112  [242/740] Linking target lib/librte_net.so.23.0
00:02:06.112  [243/740] Linking target lib/librte_bbdev.so.23.0
00:02:06.112  [244/740] Linking target lib/librte_acl.so.23.0
00:02:06.112  [245/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.112  [246/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:06.112  [247/740] Linking target lib/librte_cfgfile.so.23.0
00:02:06.112  [248/740] Linking target lib/librte_cmdline.so.23.0
00:02:06.112  [249/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:06.112  [250/740] Linking target lib/librte_hash.so.23.0
00:02:06.112  [251/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:02:06.112  [252/740] Generating lib/rte_cryptodev_def with a custom command
00:02:06.370  [253/740] Generating lib/rte_cryptodev_mingw with a custom command
00:02:06.370  [254/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:06.370  [255/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:06.370  [256/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:02:06.370  [257/740] Linking static target lib/librte_bpf.a
00:02:06.629  [258/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:06.629  [259/740] Generating lib/rte_distributor_def with a custom command
00:02:06.629  [260/740] Generating lib/rte_distributor_mingw with a custom command
00:02:06.629  [261/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:06.629  [262/740] Linking static target lib/librte_compressdev.a
00:02:06.887  [263/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:06.887  [264/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:06.887  [265/740] Generating lib/rte_efd_def with a custom command
00:02:06.887  [266/740] Generating lib/rte_efd_mingw with a custom command
00:02:07.145  [267/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:02:07.145  [268/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:07.145  [269/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:02:07.402  [270/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.402  [271/740] Linking target lib/librte_ethdev.so.23.0
00:02:07.402  [272/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:02:07.402  [273/740] Linking static target lib/librte_distributor.a
00:02:07.402  [274/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:07.402  [275/740] Linking target lib/librte_metrics.so.23.0
00:02:07.660  [276/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.660  [277/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:02:07.660  [278/740] Linking target lib/librte_compressdev.so.23.0
00:02:07.660  [279/740] Linking target lib/librte_bpf.so.23.0
00:02:07.661  [280/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:07.661  [281/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:02:07.661  [282/740] Linking target lib/librte_bitratestats.so.23.0
00:02:07.661  [283/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:02:07.661  [284/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:07.661  [285/740] Linking target lib/librte_distributor.so.23.0
00:02:07.661  [286/740] Generating lib/rte_eventdev_def with a custom command
00:02:07.661  [287/740] Generating lib/rte_eventdev_mingw with a custom command
00:02:07.661  [288/740] Generating lib/rte_gpudev_def with a custom command
00:02:07.919  [289/740] Generating lib/rte_gpudev_mingw with a custom command
00:02:08.177  [290/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:02:08.434  [291/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:08.434  [292/740] Linking static target lib/librte_efd.a
00:02:08.691  [293/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:08.691  [294/740] Linking target lib/librte_efd.so.23.0
00:02:08.691  [295/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:02:08.691  [296/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:02:08.691  [297/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:02:08.691  [298/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:02:08.691  [299/740] Linking static target lib/librte_gpudev.a
00:02:08.691  [300/740] Generating lib/rte_gro_def with a custom command
00:02:08.949  [301/740] Generating lib/rte_gro_mingw with a custom command
00:02:08.949  [302/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:08.949  [303/740] Linking static target lib/librte_cryptodev.a
00:02:09.207  [304/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:02:09.207  [305/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:02:09.466  [306/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:09.466  [307/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:02:09.725  [308/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.725  [309/740] Linking target lib/librte_gpudev.so.23.0
00:02:09.725  [310/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:02:09.725  [311/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:02:09.725  [312/740] Linking static target lib/librte_gro.a
00:02:09.725  [313/740] Generating lib/rte_gso_def with a custom command
00:02:09.725  [314/740] Generating lib/rte_gso_mingw with a custom command
00:02:09.984  [315/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:02:09.984  [316/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:02:09.984  [317/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:02:09.984  [318/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:02:09.984  [319/740] Linking target lib/librte_gro.so.23.0
00:02:10.243  [320/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:02:10.243  [321/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:02:10.243  [322/740] Generating lib/rte_ip_frag_def with a custom command
00:02:10.243  [323/740] Generating lib/rte_ip_frag_mingw with a custom command
00:02:10.502  [324/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:02:10.502  [325/740] Linking static target lib/librte_gso.a
00:02:10.502  [326/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:10.502  [327/740] Linking static target lib/librte_eventdev.a
00:02:10.502  [328/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:02:10.502  [329/740] Linking static target lib/librte_jobstats.a
00:02:10.502  [330/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:02:10.502  [331/740] Generating lib/rte_jobstats_def with a custom command
00:02:10.502  [332/740] Linking target lib/librte_gso.so.23.0
00:02:10.502  [333/740] Generating lib/rte_jobstats_mingw with a custom command
00:02:10.760  [334/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:02:10.760  [335/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:02:10.760  [336/740] Generating lib/rte_latencystats_def with a custom command
00:02:10.760  [337/740] Generating lib/rte_latencystats_mingw with a custom command
00:02:10.760  [338/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:02:11.019  [339/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.019  [340/740] Generating lib/rte_lpm_def with a custom command
00:02:11.019  [341/740] Linking target lib/librte_jobstats.so.23.0
00:02:11.019  [342/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:02:11.019  [343/740] Generating lib/rte_lpm_mingw with a custom command
00:02:11.019  [344/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:02:11.019  [345/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:02:11.019  [346/740] Linking static target lib/librte_ip_frag.a
00:02:11.278  [347/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.278  [348/740] Linking target lib/librte_cryptodev.so.23.0
00:02:11.278  [349/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:11.537  [350/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.537  [351/740] Linking target lib/librte_ip_frag.so.23.0
00:02:11.537  [352/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:02:11.537  [353/740] Linking static target lib/librte_latencystats.a
00:02:11.537  [354/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:11.537  [355/740] Generating lib/rte_member_def with a custom command
00:02:11.537  [356/740] Generating lib/rte_member_mingw with a custom command
00:02:11.796  [357/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:02:11.796  [358/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:02:11.796  [359/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:02:11.796  [360/740] Linking target lib/librte_latencystats.so.23.0
00:02:11.796  [361/740] Linking static target lib/member/libsketch_avx512_tmp.a
00:02:11.796  [362/740] Generating lib/rte_pcapng_def with a custom command
00:02:11.796  [363/740] Generating lib/rte_pcapng_mingw with a custom command
00:02:11.796  [364/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:02:11.796  [365/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:11.796  [366/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:12.055  [367/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:12.055  [368/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:02:12.055  [369/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:02:12.314  [370/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.314  [371/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:02:12.314  [372/740] Linking static target lib/librte_lpm.a
00:02:12.314  [373/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o
00:02:12.314  [374/740] Linking target lib/librte_eventdev.so.23.0
00:02:12.314  [375/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:12.314  [376/740] Generating lib/rte_power_def with a custom command
00:02:12.572  [377/740] Generating lib/rte_power_mingw with a custom command
00:02:12.572  [378/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:12.572  [379/740] Generating lib/rte_rawdev_def with a custom command
00:02:12.572  [380/740] Generating lib/rte_rawdev_mingw with a custom command
00:02:12.572  [381/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:12.572  [382/740] Generating lib/rte_regexdev_def with a custom command
00:02:12.572  [383/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:12.831  [384/740] Generating lib/rte_regexdev_mingw with a custom command
00:02:12.831  [385/740] Linking target lib/librte_lpm.so.23.0
00:02:12.831  [386/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:02:12.831  [387/740] Generating lib/rte_dmadev_def with a custom command
00:02:12.831  [388/740] Generating lib/rte_dmadev_mingw with a custom command
00:02:12.831  [389/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:02:12.831  [390/740] Linking static target lib/librte_pcapng.a
00:02:12.831  [391/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:12.831  [392/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:02:12.831  [393/740] Linking static target lib/librte_rawdev.a
00:02:12.831  [394/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o
00:02:12.831  [395/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:02:12.831  [396/740] Generating lib/rte_rib_def with a custom command
00:02:13.089  [397/740] Generating lib/rte_rib_mingw with a custom command
00:02:13.089  [398/740] Generating lib/rte_reorder_def with a custom command
00:02:13.089  [399/740] Generating lib/rte_reorder_mingw with a custom command
00:02:13.089  [400/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.089  [401/740] Linking target lib/librte_pcapng.so.23.0
00:02:13.089  [402/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:13.089  [403/740] Linking static target lib/librte_dmadev.a
00:02:13.348  [404/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:02:13.348  [405/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:13.348  [406/740] Linking static target lib/librte_power.a
00:02:13.348  [407/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.348  [408/740] Linking target lib/librte_rawdev.so.23.0
00:02:13.348  [409/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:02:13.348  [410/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:02:13.605  [411/740] Linking static target lib/librte_regexdev.a
00:02:13.605  [412/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:02:13.605  [413/740] Generating lib/rte_sched_def with a custom command
00:02:13.605  [414/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:02:13.605  [415/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:13.605  [416/740] Linking static target lib/librte_member.a
00:02:13.605  [417/740] Generating lib/rte_sched_mingw with a custom command
00:02:13.605  [418/740] Generating lib/rte_security_def with a custom command
00:02:13.605  [419/740] Generating lib/rte_security_mingw with a custom command
00:02:13.605  [420/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.863  [421/740] Linking target lib/librte_dmadev.so.23.0
00:02:13.863  [422/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:02:13.863  [423/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:02:13.863  [424/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:02:13.863  [425/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.863  [426/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:13.863  [427/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:13.863  [428/740] Generating lib/rte_stack_def with a custom command
00:02:13.863  [429/740] Linking static target lib/librte_reorder.a
00:02:13.863  [430/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:02:13.863  [431/740] Linking static target lib/librte_stack.a
00:02:13.863  [432/740] Generating lib/rte_stack_mingw with a custom command
00:02:13.863  [433/740] Linking target lib/librte_member.so.23.0
00:02:14.121  [434/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:14.121  [435/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.121  [436/740] Linking target lib/librte_stack.so.23.0
00:02:14.121  [437/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.121  [438/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:02:14.121  [439/740] Linking static target lib/librte_rib.a
00:02:14.121  [440/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.121  [441/740] Linking target lib/librte_reorder.so.23.0
00:02:14.121  [442/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.379  [443/740] Linking target lib/librte_regexdev.so.23.0
00:02:14.379  [444/740] Linking target lib/librte_power.so.23.0
00:02:14.637  [445/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:14.637  [446/740] Linking static target lib/librte_security.a
00:02:14.637  [447/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.637  [448/740] Linking target lib/librte_rib.so.23.0
00:02:14.637  [449/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:14.895  [450/740] Generating lib/rte_vhost_def with a custom command
00:02:14.895  [451/740] Generating lib/rte_vhost_mingw with a custom command
00:02:14.895  [452/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:02:14.895  [453/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:14.895  [454/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:02:14.895  [455/740] Linking target lib/librte_security.so.23.0
00:02:15.154  [456/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:02:15.154  [457/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:15.412  [458/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:15.412  [459/740] Linking static target lib/librte_sched.a
00:02:15.671  [460/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:15.671  [461/740] Linking target lib/librte_sched.so.23.0
00:02:15.671  [462/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:02:15.929  [463/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:15.929  [464/740] Generating lib/rte_ipsec_def with a custom command
00:02:15.929  [465/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:15.929  [466/740] Generating lib/rte_ipsec_mingw with a custom command
00:02:15.929  [467/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:02:15.929  [468/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:16.188  [469/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:16.188  [470/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:02:16.446  [471/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:02:16.703  [472/740] Generating lib/rte_fib_def with a custom command
00:02:16.703  [473/740] Generating lib/rte_fib_mingw with a custom command
00:02:16.703  [474/740] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:02:16.703  [475/740] Linking static target lib/fib/libtrie_avx512_tmp.a
00:02:16.703  [476/740] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:02:16.703  [477/740] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:02:16.703  [478/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:02:16.961  [479/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:16.961  [480/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:16.961  [481/740] Linking static target lib/librte_ipsec.a
00:02:17.219  [482/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.219  [483/740] Linking target lib/librte_ipsec.so.23.0
00:02:17.480  [484/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:17.739  [485/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:17.739  [486/740] Linking static target lib/librte_fib.a
00:02:17.739  [487/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:02:17.739  [488/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:17.739  [489/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:17.739  [490/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:02:17.997  [491/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:17.997  [492/740] Linking target lib/librte_fib.so.23.0
00:02:17.997  [493/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:02:18.563  [494/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:18.563  [495/740] Generating lib/rte_port_def with a custom command
00:02:18.563  [496/740] Generating lib/rte_port_mingw with a custom command
00:02:18.821  [497/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:18.821  [498/740] Generating lib/rte_pdump_def with a custom command
00:02:18.821  [499/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:18.821  [500/740] Generating lib/rte_pdump_mingw with a custom command
00:02:18.821  [501/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:19.077  [502/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:19.077  [503/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:19.077  [504/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:02:19.335  [505/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:02:19.335  [506/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:02:19.335  [507/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:19.335  [508/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:19.335  [509/740] Linking static target lib/librte_port.a
00:02:19.899  [510/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:02:19.900  [511/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:02:19.900  [512/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:19.900  [513/740] Linking target lib/librte_port.so.23.0
00:02:19.900  [514/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:02:20.157  [515/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:20.157  [516/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:20.157  [517/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:02:20.158  [518/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:20.158  [519/740] Linking static target lib/librte_pdump.a
00:02:20.415  [520/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:20.415  [521/740] Linking target lib/librte_pdump.so.23.0
00:02:20.673  [522/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:20.930  [523/740] Generating lib/rte_table_def with a custom command
00:02:20.930  [524/740] Generating lib/rte_table_mingw with a custom command
00:02:20.930  [525/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:20.930  [526/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:02:21.187  [527/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:21.187  [528/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:02:21.187  [529/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:21.445  [530/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:21.445  [531/740] Generating lib/rte_pipeline_def with a custom command
00:02:21.445  [532/740] Generating lib/rte_pipeline_mingw with a custom command
00:02:21.445  [533/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:21.729  [534/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:21.729  [535/740] Linking static target lib/librte_table.a
00:02:21.729  [536/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:22.305  [537/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:22.305  [538/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:22.305  [539/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:22.305  [540/740] Linking target lib/librte_table.so.23.0
00:02:22.305  [541/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:22.305  [542/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:22.563  [543/740] Generating lib/rte_graph_def with a custom command
00:02:22.563  [544/740] Generating lib/rte_graph_mingw with a custom command
00:02:22.563  [545/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:22.821  [546/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:22.821  [547/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:23.079  [548/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:23.079  [549/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:23.079  [550/740] Linking static target lib/librte_graph.a
00:02:23.337  [551/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:23.337  [552/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:23.595  [553/740] Compiling C object lib/librte_node.a.p/node_null.c.o
00:02:23.595  [554/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:23.854  [555/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:24.112  [556/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.112  [557/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:24.112  [558/740] Generating lib/rte_node_def with a custom command
00:02:24.112  [559/740] Generating lib/rte_node_mingw with a custom command
00:02:24.112  [560/740] Linking target lib/librte_graph.so.23.0
00:02:24.112  [561/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:24.112  [562/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:24.370  [563/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:24.370  [564/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:24.370  [565/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:24.370  [566/740] Generating drivers/rte_bus_pci_def with a custom command
00:02:24.370  [567/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:24.370  [568/740] Generating drivers/rte_bus_pci_mingw with a custom command
00:02:24.370  [569/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:24.628  [570/740] Generating drivers/rte_bus_vdev_def with a custom command
00:02:24.628  [571/740] Generating drivers/rte_bus_vdev_mingw with a custom command
00:02:24.628  [572/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:24.628  [573/740] Generating drivers/rte_mempool_ring_def with a custom command
00:02:24.628  [574/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:24.628  [575/740] Generating drivers/rte_mempool_ring_mingw with a custom command
00:02:24.628  [576/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:24.628  [577/740] Linking static target lib/librte_node.a
00:02:24.628  [578/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:24.886  [579/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:24.886  [580/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:24.886  [581/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:24.886  [582/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:24.887  [583/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:24.887  [584/740] Linking static target drivers/librte_bus_vdev.a
00:02:25.145  [585/740] Linking target lib/librte_node.so.23.0
00:02:25.145  [586/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:25.145  [587/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:25.145  [588/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:25.145  [589/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.145  [590/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:25.145  [591/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:25.145  [592/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:25.403  [593/740] Linking static target drivers/librte_bus_pci.a
00:02:25.403  [594/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:25.403  [595/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:25.661  [596/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:25.661  [597/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:25.661  [598/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:25.661  [599/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:25.919  [600/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:25.919  [601/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:25.919  [602/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:26.177  [603/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:26.177  [604/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:26.177  [605/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:26.177  [606/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:26.177  [607/740] Linking static target drivers/librte_mempool_ring.a
00:02:26.177  [608/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:26.178  [609/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:26.744  [610/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:27.310  [611/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:27.310  [612/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:27.310  [613/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:27.876  [614/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:27.876  [615/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:27.876  [616/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:28.446  [617/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:28.703  [618/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:28.961  [619/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:28.961  [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:28.961  [621/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:28.961  [622/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:28.961  [623/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:28.961  [624/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:30.335  [625/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:30.335  [626/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:30.335  [627/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:30.593  [628/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:30.593  [629/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:30.593  [630/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:30.593  [631/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:30.593  [632/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:30.852  [633/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:31.111  [634/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:31.370  [635/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:31.370  [636/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:31.628  [637/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:31.887  [638/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:31.887  [639/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:31.887  [640/740] Linking static target drivers/librte_net_i40e.a
00:02:31.887  [641/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:31.887  [642/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:31.888  [643/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:31.888  [644/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:31.888  [645/740] Linking static target lib/librte_vhost.a
00:02:32.147  [646/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:32.406  [647/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:32.406  [648/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:32.406  [649/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.663  [650/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:32.663  [651/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:32.921  [652/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:33.179  [653/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:33.179  [654/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:33.179  [655/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:33.179  [656/740] Linking target lib/librte_vhost.so.23.0
00:02:33.437  [657/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:33.695  [658/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:33.953  [659/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:33.953  [660/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:33.953  [661/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:33.953  [662/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:33.953  [663/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:34.210  [664/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:34.210  [665/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:34.210  [666/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:34.210  [667/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:34.812  [668/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:34.812  [669/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:35.082  [670/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:35.082  [671/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:35.649  [672/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:35.649  [673/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:35.907  [674/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:36.165  [675/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:36.165  [676/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:36.423  [677/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:36.423  [678/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:36.423  [679/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:36.680  [680/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:36.938  [681/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:36.938  [682/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:36.938  [683/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:37.196  [684/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:37.196  [685/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:37.196  [686/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:37.197  [687/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:37.455  [688/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:37.455  [689/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:37.713  [690/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:37.713  [691/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:37.713  [692/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:38.279  [693/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:38.279  [694/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:38.537  [695/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:38.537  [696/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:38.795  [697/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:39.361  [698/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:39.361  [699/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:39.361  [700/740] Linking static target lib/librte_pipeline.a
00:02:39.361  [701/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:39.619  [702/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:39.619  [703/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:39.878  [704/740] Linking target app/dpdk-pdump
00:02:39.878  [705/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:39.878  [706/740] Linking target app/dpdk-proc-info
00:02:39.878  [707/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:39.878  [708/740] Linking target app/dpdk-test-acl
00:02:40.136  [709/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:40.136  [710/740] Linking target app/dpdk-test-bbdev
00:02:40.136  [711/740] Linking target app/dpdk-test-cmdline
00:02:40.394  [712/740] Linking target app/dpdk-test-compress-perf
00:02:40.394  [713/740] Linking target app/dpdk-test-crypto-perf
00:02:40.394  [714/740] Linking target app/dpdk-test-eventdev
00:02:40.394  [715/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:40.651  [716/740] Linking target app/dpdk-test-fib
00:02:40.651  [717/740] Linking target app/dpdk-test-flow-perf
00:02:40.651  [718/740] Linking target app/dpdk-test-gpudev
00:02:40.651  [719/740] Linking target app/dpdk-test-pipeline
00:02:41.216  [720/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:41.474  [721/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:41.474  [722/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:41.474  [723/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:41.732  [724/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:41.732  [725/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:41.990  [726/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:41.990  [727/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:42.249  [728/740] Linking target lib/librte_pipeline.so.23.0
00:02:42.507  [729/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:42.507  [730/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:42.766  [731/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:42.766  [732/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:42.766  [733/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:42.766  [734/740] Linking target app/dpdk-test-sad
00:02:43.333  [735/740] Linking target app/dpdk-test-regex
00:02:43.333  [736/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:43.333  [737/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:43.333  [738/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:43.591  [739/740] Linking target app/dpdk-testpmd
00:02:43.849  [740/740] Linking target app/dpdk-test-security-perf
00:02:43.849    05:48:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s
00:02:43.849   05:48:04 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:43.849   05:48:04 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
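(The three xtrace lines above show the autobuild flow at this point: the script reads `uname -s`, compares it against FreeBSD — the `\F\r\e\e\B\S\D` form is just bash escaping the unquoted pattern in the trace — and, since this run is on Linux, falls through to installing the freshly built DPDK with ninja. A minimal sketch of that gate, assuming only what the trace shows: the build directory and -j10 are taken from the log, while the FreeBSD branch body is not visible in this run and is left as a stub.

    #!/usr/bin/env bash
    # Sketch of the OS gate traced in autobuild_common.sh above.
    build_dir=/home/vagrant/spdk_repo/dpdk/build-tmp

    if [[ "$(uname -s)" == FreeBSD ]]; then
        # FreeBSD branch not exercised in this run; its body is not shown in the trace.
        :
    else
        # Linux path taken here: install the built DPDK artifacts.
        ninja -C "$build_dir" -j10 install
    fi

The ninja install output that follows is what produces the long "Installing ..." listing below.)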
00:02:43.849  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:02:43.849  [0/1] Installing files.
00:02:44.108  Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.108  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.109  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.371  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.372  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.373  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.374  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:44.375  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:02:44.375  Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.375  Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:44.638  Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:44.638  Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:44.638  Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.638  Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0
00:02:44.638  Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.638  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.639  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.640  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:02:44.641  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:02:44.641  Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23
00:02:44.641  Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:02:44.641  Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23
00:02:44.641  Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:02:44.641  Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23
00:02:44.641  Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:02:44.641  Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23
00:02:44.641  Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:02:44.641  Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23
00:02:44.641  Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:02:44.641  Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23
00:02:44.641  Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:02:44.641  Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23
00:02:44.641  Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:02:44.641  Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23
00:02:44.641  Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:02:44.641  Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23
00:02:44.641  Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:02:44.641  Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23
00:02:44.641  Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:02:44.641  Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23
00:02:44.641  Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:02:44.641  Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23
00:02:44.641  Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:02:44.642  Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23
00:02:44.642  Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:02:44.642  Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23
00:02:44.642  Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:02:44.642  Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23
00:02:44.642  Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:02:44.642  Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23
00:02:44.642  Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:02:44.642  Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23
00:02:44.642  Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:02:44.642  Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23
00:02:44.642  Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:02:44.642  Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23
00:02:44.642  Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:02:44.642  Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23
00:02:44.642  Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:02:44.642  Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23
00:02:44.642  Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:02:44.642  Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23
00:02:44.642  Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:02:44.642  Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23
00:02:44.642  Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:02:44.642  Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23
00:02:44.642  Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:02:44.642  Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23
00:02:44.642  Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:02:44.642  Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23
00:02:44.642  Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:02:44.642  Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23
00:02:44.642  Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:02:44.642  './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so'
00:02:44.642  './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23'
00:02:44.642  './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0'
00:02:44.642  './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so'
00:02:44.642  './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23'
00:02:44.642  './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0'
00:02:44.642  './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so'
00:02:44.642  './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23'
00:02:44.642  './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0'
00:02:44.642  './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so'
00:02:44.642  './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23'
00:02:44.642  './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0'
00:02:44.642  Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23
00:02:44.642  Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:02:44.642  Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23
00:02:44.642  Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:02:44.642  Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23
00:02:44.642  Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:02:44.642  Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23
00:02:44.642  Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:02:44.642  Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23
00:02:44.642  Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:02:44.642  Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23
00:02:44.642  Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:02:44.642  Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23
00:02:44.642  Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so
00:02:44.642  Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23
00:02:44.642  Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so
00:02:44.642  Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23
00:02:44.642  Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so
00:02:44.642  Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23
00:02:44.642  Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so
00:02:44.642  Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23
00:02:44.642  Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:02:44.642  Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23
00:02:44.642  Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so
00:02:44.642  Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23
00:02:44.642  Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so
00:02:44.642  Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23
00:02:44.642  Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so
00:02:44.642  Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23
00:02:44.642  Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so
00:02:44.642  Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23
00:02:44.642  Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so
00:02:44.642  Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23
00:02:44.642  Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so
00:02:44.642  Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23
00:02:44.642  Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so
00:02:44.642  Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23
00:02:44.642  Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so
00:02:44.642  Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23
00:02:44.642  Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:02:44.642  Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23
00:02:44.642  Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:02:44.642  Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23
00:02:44.642  Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:02:44.642  Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23
00:02:44.642  Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:02:44.642  Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23
00:02:44.642  Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:02:44.642  Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23
00:02:44.642  Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:02:44.642  Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23
00:02:44.643  Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so
00:02:44.643  Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23
00:02:44.643  Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so
00:02:44.643  Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23
00:02:44.643  Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so
00:02:44.643  Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23
00:02:44.643  Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so
00:02:44.643  Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0'
00:02:44.900   05:48:05 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat
00:02:44.900   05:48:05 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:44.900  
00:02:44.900  real	0m57.130s
00:02:44.900  user	6m55.229s
00:02:44.900  sys	0m59.409s
00:02:44.900   05:48:05 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:44.900   05:48:05 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:02:44.900  ************************************
00:02:44.900  END TEST build_native_dpdk
00:02:44.900  ************************************
00:02:44.900   05:48:05  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:44.901   05:48:05  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:44.901   05:48:05  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:44.901   05:48:05  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:44.901   05:48:05  -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:44.901   05:48:05  -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:44.901   05:48:05  -- common/autobuild_common.sh@426 -- $ run_test unittest_build _unittest_build
00:02:44.901   05:48:05  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:44.901   05:48:05  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:44.901   05:48:05  -- common/autotest_common.sh@10 -- $ set +x
00:02:44.901  ************************************
00:02:44.901  START TEST unittest_build
00:02:44.901  ************************************
00:02:44.901   05:48:05 unittest_build -- common/autotest_common.sh@1129 -- $ _unittest_build
00:02:44.901   05:48:05 unittest_build -- common/autobuild_common.sh@417 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared
00:02:44.901  Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:02:44.901  DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:02:44.901  DPDK includes: /home/vagrant/spdk_repo/dpdk/build/include
00:02:44.901  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:45.466  Using 'verbs' RDMA provider
00:03:01.299  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:13.504  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:13.504  Creating mk/config.mk...done.
00:03:13.504  Creating mk/cc.flags.mk...done.
00:03:13.504  Type 'make' to build.
00:03:13.504   05:48:33 unittest_build -- common/autobuild_common.sh@418 -- $ make -j10
00:03:13.504  make[1]: Nothing to be done for 'all'.
00:03:45.651    CC lib/ut/ut.o
00:03:45.651    CC lib/log/log.o
00:03:45.651    CC lib/log/log_flags.o
00:03:45.651    CC lib/log/log_deprecated.o
00:03:45.651    CC lib/ut_mock/mock.o
00:03:45.910    LIB libspdk_ut_mock.a
00:03:45.910    LIB libspdk_ut.a
00:03:45.910    LIB libspdk_log.a
00:03:46.167    CC lib/ioat/ioat.o
00:03:46.167    CC lib/dma/dma.o
00:03:46.167    CC lib/util/base64.o
00:03:46.167    CC lib/util/bit_array.o
00:03:46.167    CC lib/util/cpuset.o
00:03:46.167    CC lib/util/crc16.o
00:03:46.167    CC lib/util/crc32.o
00:03:46.167    CC lib/util/crc32c.o
00:03:46.167    CXX lib/trace_parser/trace.o
00:03:46.167    CC lib/vfio_user/host/vfio_user_pci.o
00:03:46.426    CC lib/util/crc32_ieee.o
00:03:46.426    CC lib/vfio_user/host/vfio_user.o
00:03:46.426    CC lib/util/crc64.o
00:03:46.426    CC lib/util/dif.o
00:03:46.426    LIB libspdk_dma.a
00:03:46.426    CC lib/util/fd.o
00:03:46.426    CC lib/util/fd_group.o
00:03:46.426    CC lib/util/file.o
00:03:46.426    CC lib/util/hexlify.o
00:03:46.426    LIB libspdk_ioat.a
00:03:46.426    CC lib/util/iov.o
00:03:46.684    CC lib/util/math.o
00:03:46.684    CC lib/util/net.o
00:03:46.684    CC lib/util/pipe.o
00:03:46.684    LIB libspdk_vfio_user.a
00:03:46.684    CC lib/util/strerror_tls.o
00:03:46.684    CC lib/util/string.o
00:03:46.684    CC lib/util/uuid.o
00:03:46.684    CC lib/util/xor.o
00:03:46.684    CC lib/util/zipf.o
00:03:46.684    CC lib/util/md5.o
00:03:47.251    LIB libspdk_util.a
00:03:47.510    CC lib/idxd/idxd.o
00:03:47.510    CC lib/idxd/idxd_user.o
00:03:47.510    CC lib/idxd/idxd_kernel.o
00:03:47.510    CC lib/rdma_utils/rdma_utils.o
00:03:47.510    CC lib/env_dpdk/env.o
00:03:47.510    CC lib/conf/conf.o
00:03:47.510    CC lib/env_dpdk/memory.o
00:03:47.510    CC lib/vmd/vmd.o
00:03:47.510    CC lib/json/json_parse.o
00:03:47.510    LIB libspdk_trace_parser.a
00:03:47.510    CC lib/json/json_util.o
00:03:47.769    CC lib/json/json_write.o
00:03:47.769    LIB libspdk_conf.a
00:03:47.769    CC lib/vmd/led.o
00:03:47.769    CC lib/env_dpdk/pci.o
00:03:47.769    LIB libspdk_rdma_utils.a
00:03:47.769    CC lib/env_dpdk/init.o
00:03:47.769    CC lib/env_dpdk/threads.o
00:03:47.769    CC lib/env_dpdk/pci_ioat.o
00:03:48.027    CC lib/env_dpdk/pci_virtio.o
00:03:48.027    CC lib/env_dpdk/pci_vmd.o
00:03:48.027    LIB libspdk_json.a
00:03:48.027    CC lib/env_dpdk/pci_idxd.o
00:03:48.027    CC lib/env_dpdk/pci_event.o
00:03:48.027    CC lib/env_dpdk/sigbus_handler.o
00:03:48.027    CC lib/env_dpdk/pci_dpdk.o
00:03:48.286    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:48.286    CC lib/rdma_provider/common.o
00:03:48.286    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:48.286    CC lib/rdma_provider/rdma_provider_verbs.o
00:03:48.286    LIB libspdk_idxd.a
00:03:48.286    LIB libspdk_vmd.a
00:03:48.544    LIB libspdk_rdma_provider.a
00:03:48.544    CC lib/jsonrpc/jsonrpc_server.o
00:03:48.544    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:48.544    CC lib/jsonrpc/jsonrpc_client.o
00:03:48.544    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:48.802    LIB libspdk_jsonrpc.a
00:03:49.062    CC lib/rpc/rpc.o
00:03:49.321    LIB libspdk_rpc.a
00:03:49.321    LIB libspdk_env_dpdk.a
00:03:49.580    CC lib/notify/notify.o
00:03:49.580    CC lib/trace/trace.o
00:03:49.580    CC lib/notify/notify_rpc.o
00:03:49.580    CC lib/trace/trace_flags.o
00:03:49.580    CC lib/trace/trace_rpc.o
00:03:49.580    CC lib/keyring/keyring.o
00:03:49.580    CC lib/keyring/keyring_rpc.o
00:03:49.839    LIB libspdk_notify.a
00:03:50.097    LIB libspdk_keyring.a
00:03:50.097    LIB libspdk_trace.a
00:03:50.356    CC lib/sock/sock_rpc.o
00:03:50.356    CC lib/sock/sock.o
00:03:50.356    CC lib/thread/thread.o
00:03:50.356    CC lib/thread/iobuf.o
00:03:50.923    LIB libspdk_sock.a
00:03:51.182    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:51.182    CC lib/nvme/nvme_ctrlr.o
00:03:51.182    CC lib/nvme/nvme_fabric.o
00:03:51.182    CC lib/nvme/nvme_ns.o
00:03:51.182    CC lib/nvme/nvme_pcie_common.o
00:03:51.182    CC lib/nvme/nvme_pcie.o
00:03:51.182    CC lib/nvme/nvme_ns_cmd.o
00:03:51.182    CC lib/nvme/nvme_qpair.o
00:03:51.182    CC lib/nvme/nvme.o
00:03:52.119    CC lib/nvme/nvme_quirks.o
00:03:52.119    CC lib/nvme/nvme_transport.o
00:03:52.119    CC lib/nvme/nvme_discovery.o
00:03:52.119    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:52.378    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:52.378    CC lib/nvme/nvme_tcp.o
00:03:52.378    LIB libspdk_thread.a
00:03:52.378    CC lib/nvme/nvme_opal.o
00:03:52.378    CC lib/nvme/nvme_io_msg.o
00:03:52.637    CC lib/nvme/nvme_poll_group.o
00:03:52.637    CC lib/nvme/nvme_zns.o
00:03:52.895    CC lib/nvme/nvme_stubs.o
00:03:52.895    CC lib/nvme/nvme_auth.o
00:03:52.895    CC lib/nvme/nvme_cuse.o
00:03:53.154    CC lib/nvme/nvme_rdma.o
00:03:53.154    CC lib/accel/accel.o
00:03:53.412    CC lib/blob/blobstore.o
00:03:53.412    CC lib/blob/request.o
00:03:53.412    CC lib/blob/zeroes.o
00:03:53.671    CC lib/blob/blob_bs_dev.o
00:03:53.930    CC lib/init/json_config.o
00:03:53.930    CC lib/init/subsystem.o
00:03:53.930    CC lib/virtio/virtio.o
00:03:54.189    CC lib/virtio/virtio_vhost_user.o
00:03:54.189    CC lib/virtio/virtio_vfio_user.o
00:03:54.189    CC lib/init/subsystem_rpc.o
00:03:54.189    CC lib/accel/accel_rpc.o
00:03:54.189    CC lib/init/rpc.o
00:03:54.189    CC lib/accel/accel_sw.o
00:03:54.448    CC lib/virtio/virtio_pci.o
00:03:54.448    LIB libspdk_init.a
00:03:54.448    CC lib/fsdev/fsdev_io.o
00:03:54.448    CC lib/fsdev/fsdev.o
00:03:54.448    CC lib/fsdev/fsdev_rpc.o
00:03:54.707    CC lib/event/reactor.o
00:03:54.707    CC lib/event/app.o
00:03:54.707    CC lib/event/log_rpc.o
00:03:54.707    CC lib/event/app_rpc.o
00:03:54.707    LIB libspdk_virtio.a
00:03:54.707    LIB libspdk_accel.a
00:03:54.707    CC lib/event/scheduler_static.o
00:03:54.966    CC lib/bdev/bdev.o
00:03:54.966    LIB libspdk_nvme.a
00:03:54.966    CC lib/bdev/bdev_rpc.o
00:03:54.966    CC lib/bdev/bdev_zone.o
00:03:54.966    CC lib/bdev/part.o
00:03:55.225    CC lib/bdev/scsi_nvme.o
00:03:55.225    LIB libspdk_event.a
00:03:55.483    LIB libspdk_fsdev.a
00:03:55.742    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:56.677    LIB libspdk_fuse_dispatcher.a
00:03:57.612    LIB libspdk_blob.a
00:03:57.871    CC lib/lvol/lvol.o
00:03:57.871    CC lib/blobfs/blobfs.o
00:03:57.871    CC lib/blobfs/tree.o
00:03:58.821    LIB libspdk_bdev.a
00:03:58.821    CC lib/ublk/ublk.o
00:03:58.821    CC lib/ublk/ublk_rpc.o
00:03:58.821    CC lib/nvmf/ctrlr.o
00:03:58.821    CC lib/nvmf/ctrlr_discovery.o
00:03:58.821    CC lib/scsi/dev.o
00:03:58.821    CC lib/scsi/lun.o
00:03:58.821    CC lib/nbd/nbd.o
00:03:58.821    CC lib/ftl/ftl_core.o
00:03:59.080    CC lib/ftl/ftl_init.o
00:03:59.080    LIB libspdk_blobfs.a
00:03:59.080    LIB libspdk_lvol.a
00:03:59.080    CC lib/ftl/ftl_layout.o
00:03:59.338    CC lib/ftl/ftl_debug.o
00:03:59.338    CC lib/ftl/ftl_io.o
00:03:59.338    CC lib/scsi/port.o
00:03:59.338    CC lib/ftl/ftl_sb.o
00:03:59.338    CC lib/ftl/ftl_l2p.o
00:03:59.338    CC lib/nbd/nbd_rpc.o
00:03:59.338    CC lib/scsi/scsi.o
00:03:59.597    CC lib/nvmf/ctrlr_bdev.o
00:03:59.597    CC lib/nvmf/subsystem.o
00:03:59.597    CC lib/ftl/ftl_l2p_flat.o
00:03:59.597    CC lib/ftl/ftl_nv_cache.o
00:03:59.597    LIB libspdk_nbd.a
00:03:59.597    CC lib/nvmf/nvmf.o
00:03:59.597    CC lib/ftl/ftl_band.o
00:03:59.597    CC lib/scsi/scsi_bdev.o
00:03:59.597    CC lib/scsi/scsi_pr.o
00:03:59.858    LIB libspdk_ublk.a
00:03:59.858    CC lib/scsi/scsi_rpc.o
00:03:59.858    CC lib/ftl/ftl_band_ops.o
00:03:59.858    CC lib/scsi/task.o
00:04:00.145    CC lib/nvmf/nvmf_rpc.o
00:04:00.145    CC lib/nvmf/transport.o
00:04:00.145    CC lib/nvmf/tcp.o
00:04:00.428    CC lib/ftl/ftl_writer.o
00:04:00.428    LIB libspdk_scsi.a
00:04:00.428    CC lib/ftl/ftl_rq.o
00:04:00.428    CC lib/ftl/ftl_reloc.o
00:04:00.694    CC lib/ftl/ftl_l2p_cache.o
00:04:00.694    CC lib/ftl/ftl_p2l.o
00:04:00.694    CC lib/nvmf/stubs.o
00:04:00.953    CC lib/ftl/ftl_p2l_log.o
00:04:00.953    CC lib/ftl/mngt/ftl_mngt.o
00:04:01.212    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:04:01.212    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:04:01.212    CC lib/ftl/mngt/ftl_mngt_startup.o
00:04:01.212    CC lib/iscsi/conn.o
00:04:01.471    CC lib/iscsi/init_grp.o
00:04:01.471    CC lib/iscsi/iscsi.o
00:04:01.471    CC lib/iscsi/param.o
00:04:01.471    CC lib/iscsi/portal_grp.o
00:04:01.471    CC lib/vhost/vhost.o
00:04:01.471    CC lib/nvmf/mdns_server.o
00:04:01.471    CC lib/nvmf/rdma.o
00:04:01.471    CC lib/ftl/mngt/ftl_mngt_md.o
00:04:01.729    CC lib/iscsi/tgt_node.o
00:04:01.729    CC lib/iscsi/iscsi_subsystem.o
00:04:01.729    CC lib/vhost/vhost_rpc.o
00:04:01.989    CC lib/vhost/vhost_scsi.o
00:04:01.989    CC lib/ftl/mngt/ftl_mngt_misc.o
00:04:02.248    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:04:02.248    CC lib/nvmf/auth.o
00:04:02.248    CC lib/iscsi/iscsi_rpc.o
00:04:02.248    CC lib/iscsi/task.o
00:04:02.508    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:04:02.508    CC lib/ftl/mngt/ftl_mngt_band.o
00:04:02.508    CC lib/vhost/vhost_blk.o
00:04:02.508    CC lib/vhost/rte_vhost_user.o
00:04:02.508    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:04:02.766    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:04:02.766    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:04:02.766    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:04:02.766    CC lib/ftl/utils/ftl_conf.o
00:04:03.025    CC lib/ftl/utils/ftl_md.o
00:04:03.025    CC lib/ftl/utils/ftl_mempool.o
00:04:03.025    CC lib/ftl/utils/ftl_bitmap.o
00:04:03.025    CC lib/ftl/utils/ftl_property.o
00:04:03.285    LIB libspdk_iscsi.a
00:04:03.285    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:04:03.285    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:04:03.285    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:04:03.285    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:04:03.543    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:04:03.544    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:04:03.544    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:04:03.544    CC lib/ftl/upgrade/ftl_sb_v3.o
00:04:03.544    CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:03.544    CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:03.544    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:03.544    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:04:03.802    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:04:03.802    CC lib/ftl/base/ftl_base_dev.o
00:04:03.802    CC lib/ftl/base/ftl_base_bdev.o
00:04:03.802    CC lib/ftl/ftl_trace.o
00:04:03.802    LIB libspdk_vhost.a
00:04:04.061    LIB libspdk_ftl.a
00:04:04.628    LIB libspdk_nvmf.a
00:04:04.887    CC module/env_dpdk/env_dpdk_rpc.o
00:04:04.887    CC module/fsdev/aio/fsdev_aio.o
00:04:04.887    CC module/accel/dsa/accel_dsa.o
00:04:04.887    CC module/accel/error/accel_error.o
00:04:04.887    CC module/accel/ioat/accel_ioat.o
00:04:04.887    CC module/keyring/file/keyring.o
00:04:04.887    CC module/keyring/linux/keyring.o
00:04:04.887    CC module/blob/bdev/blob_bdev.o
00:04:04.887    CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:04.887    CC module/sock/posix/posix.o
00:04:04.887    LIB libspdk_env_dpdk_rpc.a
00:04:05.147    CC module/keyring/file/keyring_rpc.o
00:04:05.147    CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:05.147    CC module/keyring/linux/keyring_rpc.o
00:04:05.147    CC module/accel/ioat/accel_ioat_rpc.o
00:04:05.147    LIB libspdk_scheduler_dynamic.a
00:04:05.147    CC module/accel/error/accel_error_rpc.o
00:04:05.147    LIB libspdk_keyring_file.a
00:04:05.406    LIB libspdk_keyring_linux.a
00:04:05.406    CC module/accel/dsa/accel_dsa_rpc.o
00:04:05.406    LIB libspdk_blob_bdev.a
00:04:05.406    CC module/fsdev/aio/linux_aio_mgr.o
00:04:05.406    LIB libspdk_accel_ioat.a
00:04:05.406    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:05.406    LIB libspdk_accel_error.a
00:04:05.406    CC module/scheduler/gscheduler/gscheduler.o
00:04:05.406    LIB libspdk_accel_dsa.a
00:04:05.406    CC module/accel/iaa/accel_iaa.o
00:04:05.665    CC module/bdev/delay/vbdev_delay.o
00:04:05.665    CC module/accel/iaa/accel_iaa_rpc.o
00:04:05.665    CC module/bdev/error/vbdev_error.o
00:04:05.665    LIB libspdk_scheduler_dpdk_governor.a
00:04:05.665    LIB libspdk_scheduler_gscheduler.a
00:04:05.665    CC module/bdev/error/vbdev_error_rpc.o
00:04:05.665    CC module/blobfs/bdev/blobfs_bdev.o
00:04:05.665    CC module/bdev/gpt/gpt.o
00:04:05.665    CC module/bdev/gpt/vbdev_gpt.o
00:04:05.665    LIB libspdk_accel_iaa.a
00:04:05.665    CC module/bdev/delay/vbdev_delay_rpc.o
00:04:05.665    CC module/bdev/lvol/vbdev_lvol.o
00:04:05.924    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:05.924    LIB libspdk_fsdev_aio.a
00:04:05.924    LIB libspdk_bdev_error.a
00:04:05.924    LIB libspdk_sock_posix.a
00:04:05.924    CC module/bdev/malloc/bdev_malloc.o
00:04:05.924    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:05.924    LIB libspdk_blobfs_bdev.a
00:04:05.924    CC module/bdev/nvme/bdev_nvme.o
00:04:05.924    LIB libspdk_bdev_delay.a
00:04:06.183    CC module/bdev/null/bdev_null.o
00:04:06.183    LIB libspdk_bdev_gpt.a
00:04:06.183    CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:06.183    CC module/bdev/passthru/vbdev_passthru.o
00:04:06.183    CC module/bdev/raid/bdev_raid.o
00:04:06.183    CC module/bdev/split/vbdev_split.o
00:04:06.183    CC module/bdev/zone_block/vbdev_zone_block.o
00:04:06.443    CC module/bdev/split/vbdev_split_rpc.o
00:04:06.443    CC module/bdev/null/bdev_null_rpc.o
00:04:06.443    CC module/bdev/raid/bdev_raid_rpc.o
00:04:06.443    LIB libspdk_bdev_malloc.a
00:04:06.443    LIB libspdk_bdev_lvol.a
00:04:06.443    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:06.443    CC module/bdev/raid/bdev_raid_sb.o
00:04:06.443    LIB libspdk_bdev_split.a
00:04:06.702    LIB libspdk_bdev_null.a
00:04:06.702    CC module/bdev/aio/bdev_aio.o
00:04:06.702    CC module/bdev/ftl/bdev_ftl.o
00:04:06.702    LIB libspdk_bdev_passthru.a
00:04:06.702    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:06.702    CC module/bdev/iscsi/bdev_iscsi.o
00:04:06.702    CC module/bdev/raid/raid0.o
00:04:06.702    CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:06.961    CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:06.961    CC module/bdev/virtio/bdev_virtio_blk.o
00:04:06.961    LIB libspdk_bdev_zone_block.a
00:04:06.961    CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:06.961    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:06.961    LIB libspdk_bdev_ftl.a
00:04:06.962    CC module/bdev/raid/raid1.o
00:04:07.221    CC module/bdev/raid/concat.o
00:04:07.221    CC module/bdev/aio/bdev_aio_rpc.o
00:04:07.221    CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:07.221    CC module/bdev/nvme/nvme_rpc.o
00:04:07.221    LIB libspdk_bdev_iscsi.a
00:04:07.221    CC module/bdev/nvme/bdev_mdns_client.o
00:04:07.221    CC module/bdev/nvme/vbdev_opal.o
00:04:07.221    LIB libspdk_bdev_aio.a
00:04:07.481    CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:07.481    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:07.481    LIB libspdk_bdev_virtio.a
00:04:07.481    LIB libspdk_bdev_raid.a
00:04:09.389    LIB libspdk_bdev_nvme.a
00:04:09.958    CC module/event/subsystems/iobuf/iobuf.o
00:04:09.958    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:04:09.958    CC module/event/subsystems/keyring/keyring.o
00:04:09.958    CC module/event/subsystems/vmd/vmd.o
00:04:09.958    CC module/event/subsystems/fsdev/fsdev.o
00:04:09.958    CC module/event/subsystems/vmd/vmd_rpc.o
00:04:09.958    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:04:09.958    CC module/event/subsystems/scheduler/scheduler.o
00:04:09.958    CC module/event/subsystems/sock/sock.o
00:04:10.217    LIB libspdk_event_fsdev.a
00:04:10.217    LIB libspdk_event_keyring.a
00:04:10.217    LIB libspdk_event_vhost_blk.a
00:04:10.217    LIB libspdk_event_scheduler.a
00:04:10.217    LIB libspdk_event_vmd.a
00:04:10.217    LIB libspdk_event_sock.a
00:04:10.217    LIB libspdk_event_iobuf.a
00:04:10.476    CC module/event/subsystems/accel/accel.o
00:04:10.736    LIB libspdk_event_accel.a
00:04:10.995    CC module/event/subsystems/bdev/bdev.o
00:04:11.254    LIB libspdk_event_bdev.a
00:04:11.254    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:04:11.254    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:04:11.254    CC module/event/subsystems/scsi/scsi.o
00:04:11.254    CC module/event/subsystems/nbd/nbd.o
00:04:11.254    CC module/event/subsystems/ublk/ublk.o
00:04:11.513    LIB libspdk_event_nbd.a
00:04:11.513    LIB libspdk_event_ublk.a
00:04:11.513    LIB libspdk_event_scsi.a
00:04:11.772    LIB libspdk_event_nvmf.a
00:04:11.772    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:04:11.772    CC module/event/subsystems/iscsi/iscsi.o
00:04:12.031    LIB libspdk_event_iscsi.a
00:04:12.031    LIB libspdk_event_vhost_scsi.a
00:04:12.290    CXX app/trace/trace.o
00:04:12.290    CC app/trace_record/trace_record.o
00:04:12.290    CC app/nvmf_tgt/nvmf_main.o
00:04:12.290    CC examples/interrupt_tgt/interrupt_tgt.o
00:04:12.290    CC app/iscsi_tgt/iscsi_tgt.o
00:04:12.290    CC test/thread/poller_perf/poller_perf.o
00:04:12.290    CC examples/util/zipf/zipf.o
00:04:12.290    CC examples/ioat/perf/perf.o
00:04:12.290    CC test/app/bdev_svc/bdev_svc.o
00:04:12.290    CC test/dma/test_dma/test_dma.o
00:04:12.548    LINK interrupt_tgt
00:04:12.548    LINK poller_perf
00:04:12.548    LINK zipf
00:04:12.548    LINK nvmf_tgt
00:04:12.548    LINK bdev_svc
00:04:12.548    LINK iscsi_tgt
00:04:12.548    LINK spdk_trace_record
00:04:12.807    LINK ioat_perf
00:04:12.807    LINK spdk_trace
00:04:13.066    CC examples/ioat/verify/verify.o
00:04:13.066    LINK test_dma
00:04:13.325    CC app/spdk_lspci/spdk_lspci.o
00:04:13.325    CC test/thread/lock/spdk_lock.o
00:04:13.325    CC app/spdk_tgt/spdk_tgt.o
00:04:13.325    CC examples/thread/thread/thread_ex.o
00:04:13.325    LINK verify
00:04:13.325    LINK spdk_lspci
00:04:13.584    LINK spdk_tgt
00:04:13.584    LINK thread
00:04:13.843    CC examples/sock/hello_world/hello_sock.o
00:04:13.843    TEST_HEADER include/spdk/accel.h
00:04:13.843    TEST_HEADER include/spdk/accel_module.h
00:04:13.843    TEST_HEADER include/spdk/assert.h
00:04:13.843    TEST_HEADER include/spdk/barrier.h
00:04:13.843    TEST_HEADER include/spdk/base64.h
00:04:13.843    TEST_HEADER include/spdk/bdev.h
00:04:13.843    TEST_HEADER include/spdk/bdev_module.h
00:04:13.843    TEST_HEADER include/spdk/bdev_zone.h
00:04:13.843    TEST_HEADER include/spdk/bit_array.h
00:04:13.843    TEST_HEADER include/spdk/bit_pool.h
00:04:13.843    TEST_HEADER include/spdk/blob.h
00:04:13.843    TEST_HEADER include/spdk/blob_bdev.h
00:04:13.843    TEST_HEADER include/spdk/blobfs.h
00:04:13.843    TEST_HEADER include/spdk/blobfs_bdev.h
00:04:13.843    TEST_HEADER include/spdk/conf.h
00:04:13.843    TEST_HEADER include/spdk/config.h
00:04:13.843    TEST_HEADER include/spdk/cpuset.h
00:04:13.843    TEST_HEADER include/spdk/crc16.h
00:04:13.843    TEST_HEADER include/spdk/crc32.h
00:04:13.843    TEST_HEADER include/spdk/crc64.h
00:04:13.843    TEST_HEADER include/spdk/dif.h
00:04:13.843    TEST_HEADER include/spdk/dma.h
00:04:13.843    TEST_HEADER include/spdk/endian.h
00:04:13.843    TEST_HEADER include/spdk/env.h
00:04:13.843    TEST_HEADER include/spdk/env_dpdk.h
00:04:13.843    TEST_HEADER include/spdk/event.h
00:04:13.843    TEST_HEADER include/spdk/fd.h
00:04:13.843    TEST_HEADER include/spdk/fd_group.h
00:04:13.843    TEST_HEADER include/spdk/file.h
00:04:13.843    TEST_HEADER include/spdk/fsdev.h
00:04:13.843    TEST_HEADER include/spdk/fsdev_module.h
00:04:13.843    TEST_HEADER include/spdk/ftl.h
00:04:13.843    TEST_HEADER include/spdk/fuse_dispatcher.h
00:04:13.843    TEST_HEADER include/spdk/gpt_spec.h
00:04:13.843    TEST_HEADER include/spdk/hexlify.h
00:04:14.102    TEST_HEADER include/spdk/histogram_data.h
00:04:14.102    TEST_HEADER include/spdk/idxd.h
00:04:14.102    TEST_HEADER include/spdk/idxd_spec.h
00:04:14.102    TEST_HEADER include/spdk/init.h
00:04:14.102    TEST_HEADER include/spdk/ioat.h
00:04:14.102    TEST_HEADER include/spdk/ioat_spec.h
00:04:14.102    TEST_HEADER include/spdk/iscsi_spec.h
00:04:14.102    TEST_HEADER include/spdk/json.h
00:04:14.102    TEST_HEADER include/spdk/jsonrpc.h
00:04:14.103    TEST_HEADER include/spdk/keyring.h
00:04:14.103    TEST_HEADER include/spdk/keyring_module.h
00:04:14.103    TEST_HEADER include/spdk/likely.h
00:04:14.103    TEST_HEADER include/spdk/log.h
00:04:14.103    TEST_HEADER include/spdk/lvol.h
00:04:14.103    TEST_HEADER include/spdk/md5.h
00:04:14.103    TEST_HEADER include/spdk/memory.h
00:04:14.103    TEST_HEADER include/spdk/mmio.h
00:04:14.103    TEST_HEADER include/spdk/nbd.h
00:04:14.103    TEST_HEADER include/spdk/net.h
00:04:14.103    TEST_HEADER include/spdk/notify.h
00:04:14.103    TEST_HEADER include/spdk/nvme.h
00:04:14.103    TEST_HEADER include/spdk/nvme_intel.h
00:04:14.103    TEST_HEADER include/spdk/nvme_ocssd.h
00:04:14.103    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:04:14.103    TEST_HEADER include/spdk/nvme_spec.h
00:04:14.103    TEST_HEADER include/spdk/nvme_zns.h
00:04:14.103    TEST_HEADER include/spdk/nvmf.h
00:04:14.103    TEST_HEADER include/spdk/nvmf_cmd.h
00:04:14.103    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:04:14.103    TEST_HEADER include/spdk/nvmf_spec.h
00:04:14.103    TEST_HEADER include/spdk/nvmf_transport.h
00:04:14.103    TEST_HEADER include/spdk/opal.h
00:04:14.103    TEST_HEADER include/spdk/opal_spec.h
00:04:14.103    TEST_HEADER include/spdk/pci_ids.h
00:04:14.103    TEST_HEADER include/spdk/pipe.h
00:04:14.103    TEST_HEADER include/spdk/queue.h
00:04:14.103    TEST_HEADER include/spdk/reduce.h
00:04:14.103    TEST_HEADER include/spdk/rpc.h
00:04:14.103    TEST_HEADER include/spdk/scheduler.h
00:04:14.103    TEST_HEADER include/spdk/scsi.h
00:04:14.103    TEST_HEADER include/spdk/scsi_spec.h
00:04:14.103    TEST_HEADER include/spdk/sock.h
00:04:14.103    TEST_HEADER include/spdk/stdinc.h
00:04:14.103    TEST_HEADER include/spdk/string.h
00:04:14.103    TEST_HEADER include/spdk/thread.h
00:04:14.103    TEST_HEADER include/spdk/trace.h
00:04:14.103    TEST_HEADER include/spdk/trace_parser.h
00:04:14.103    TEST_HEADER include/spdk/tree.h
00:04:14.103    TEST_HEADER include/spdk/ublk.h
00:04:14.103    TEST_HEADER include/spdk/util.h
00:04:14.103    TEST_HEADER include/spdk/uuid.h
00:04:14.103    TEST_HEADER include/spdk/version.h
00:04:14.103    TEST_HEADER include/spdk/vfio_user_pci.h
00:04:14.103    TEST_HEADER include/spdk/vfio_user_spec.h
00:04:14.103    TEST_HEADER include/spdk/vhost.h
00:04:14.103    TEST_HEADER include/spdk/vmd.h
00:04:14.103    TEST_HEADER include/spdk/xor.h
00:04:14.103    TEST_HEADER include/spdk/zipf.h
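(Editor's note: the TEST_HEADER lines enumerate SPDK's public headers, and the CXX test/cpp_headers/*.o objects interleaved below compile each header in its own C++ translation unit — a common trick for catching headers that are not self-contained or not C++-clean.)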
00:04:14.103    CXX test/cpp_headers/accel.o
00:04:14.103    LINK hello_sock
00:04:14.103    CXX test/cpp_headers/accel_module.o
00:04:14.362    CXX test/cpp_headers/assert.o
00:04:14.620    CC test/env/mem_callbacks/mem_callbacks.o
00:04:14.620    CXX test/cpp_headers/barrier.o
00:04:14.620    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:04:14.620    LINK mem_callbacks
00:04:14.879    CXX test/cpp_headers/base64.o
00:04:14.879    CC examples/vmd/lsvmd/lsvmd.o
00:04:14.879    CC examples/idxd/perf/perf.o
00:04:14.879    CC test/env/vtophys/vtophys.o
00:04:14.879    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:04:14.879    LINK lsvmd
00:04:14.879    CXX test/cpp_headers/bdev.o
00:04:14.879    LINK vtophys
00:04:15.137    CXX test/cpp_headers/bdev_module.o
00:04:15.137    LINK env_dpdk_post_init
00:04:15.137    LINK nvme_fuzz
00:04:15.137    LINK idxd_perf
00:04:15.137    CXX test/cpp_headers/bdev_zone.o
00:04:15.396    CC test/env/memory/memory_ut.o
00:04:15.396    CC test/env/pci/pci_ut.o
00:04:15.396    CXX test/cpp_headers/bit_array.o
00:04:15.655    LINK spdk_lock
00:04:15.655    CC examples/nvme/hello_world/hello_world.o
00:04:15.655    CXX test/cpp_headers/bit_pool.o
00:04:15.914    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:04:15.914    CC app/spdk_nvme_perf/perf.o
00:04:15.914    CC examples/vmd/led/led.o
00:04:15.914    CXX test/cpp_headers/blob.o
00:04:15.914    LINK pci_ut
00:04:15.914    LINK hello_world
00:04:15.914    CC app/spdk_nvme_identify/identify.o
00:04:15.914    LINK led
00:04:16.173    CXX test/cpp_headers/blob_bdev.o
00:04:16.173    CC examples/fsdev/hello_world/hello_fsdev.o
00:04:16.173    CC test/app/histogram_perf/histogram_perf.o
00:04:16.432    CXX test/cpp_headers/blobfs.o
00:04:16.432    LINK memory_ut
00:04:16.432    CC test/app/jsoncat/jsoncat.o
00:04:16.432    LINK histogram_perf
00:04:16.432    CXX test/cpp_headers/blobfs_bdev.o
00:04:16.432    LINK hello_fsdev
00:04:16.432    LINK jsoncat
00:04:16.691    CC test/rpc_client/rpc_client_test.o
00:04:16.691    CXX test/cpp_headers/conf.o
00:04:16.691    CXX test/cpp_headers/config.o
00:04:16.691    CC examples/nvme/reconnect/reconnect.o
00:04:16.949    LINK rpc_client_test
00:04:16.949    CXX test/cpp_headers/cpuset.o
00:04:16.949    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:04:16.949    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:04:16.949    LINK spdk_nvme_perf
00:04:16.949    CXX test/cpp_headers/crc16.o
00:04:16.949    LINK spdk_nvme_identify
00:04:16.949    CXX test/cpp_headers/crc32.o
00:04:17.208    CC examples/nvme/nvme_manage/nvme_manage.o
00:04:17.208    CXX test/cpp_headers/crc64.o
00:04:17.208    LINK reconnect
00:04:17.208    CC examples/nvme/arbitration/arbitration.o
00:04:17.208    CC examples/nvme/hotplug/hotplug.o
00:04:17.467    CXX test/cpp_headers/dif.o
00:04:17.467    LINK vhost_fuzz
00:04:17.725    CXX test/cpp_headers/dma.o
00:04:17.725    LINK hotplug
00:04:17.725    LINK arbitration
00:04:17.725    CXX test/cpp_headers/endian.o
00:04:17.725    LINK nvme_manage
00:04:17.984    CC app/spdk_nvme_discover/discovery_aer.o
00:04:17.984    CC app/spdk_top/spdk_top.o
00:04:17.984    CXX test/cpp_headers/env.o
00:04:18.243    CXX test/cpp_headers/env_dpdk.o
00:04:18.243    CXX test/cpp_headers/event.o
00:04:18.243    LINK iscsi_fuzz
00:04:18.243    LINK spdk_nvme_discover
00:04:18.243    CC app/vhost/vhost.o
00:04:18.243    CXX test/cpp_headers/fd.o
00:04:18.500    CC app/spdk_dd/spdk_dd.o
00:04:18.500    CC app/fio/nvme/fio_plugin.o
00:04:18.500    LINK vhost
00:04:18.500    CC examples/nvme/cmb_copy/cmb_copy.o
00:04:18.500    CXX test/cpp_headers/fd_group.o
00:04:18.500    CXX test/cpp_headers/file.o
00:04:18.758    CC examples/accel/perf/accel_perf.o
00:04:18.758    LINK cmb_copy
00:04:18.758    CXX test/cpp_headers/fsdev.o
00:04:19.016    LINK spdk_dd
00:04:19.016    CC test/app/stub/stub.o
00:04:19.016    CXX test/cpp_headers/fsdev_module.o
00:04:19.016    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:04:19.016    CC app/fio/bdev/fio_plugin.o
00:04:19.275    LINK spdk_top
00:04:19.275    LINK spdk_nvme
00:04:19.275    CXX test/cpp_headers/ftl.o
00:04:19.275    LINK stub
00:04:19.275    LINK histogram_ut
00:04:19.533    CXX test/cpp_headers/fuse_dispatcher.o
00:04:19.533    LINK accel_perf
00:04:19.791    CC examples/nvme/abort/abort.o
00:04:19.791    CC test/unit/lib/log/log.c/log_ut.o
00:04:19.791    CXX test/cpp_headers/gpt_spec.o
00:04:19.791    LINK spdk_bdev
00:04:19.791    CC test/unit/lib/rdma/common.c/common_ut.o
00:04:20.049    CXX test/cpp_headers/hexlify.o
00:04:20.049    LINK log_ut
00:04:20.049    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:04:20.049    LINK abort
00:04:20.308    CXX test/cpp_headers/histogram_data.o
00:04:20.308    CXX test/cpp_headers/idxd.o
00:04:20.308    CC examples/blob/hello_world/hello_blob.o
00:04:20.308    LINK pmr_persistence
00:04:20.308    CXX test/cpp_headers/idxd_spec.o
00:04:20.567    CXX test/cpp_headers/init.o
00:04:20.567    CXX test/cpp_headers/ioat.o
00:04:20.567    CC examples/blob/cli/blobcli.o
00:04:20.567    LINK hello_blob
00:04:20.567    CC examples/bdev/hello_world/hello_bdev.o
00:04:20.827    CXX test/cpp_headers/ioat_spec.o
00:04:20.827    CC test/unit/lib/util/base64.c/base64_ut.o
00:04:20.827    LINK common_ut
00:04:20.827    LINK hello_bdev
00:04:20.827    CC test/unit/lib/dma/dma.c/dma_ut.o
00:04:20.827    CXX test/cpp_headers/iscsi_spec.o
00:04:21.090    CC examples/bdev/bdevperf/bdevperf.o
00:04:21.090    LINK base64_ut
00:04:21.090    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:04:21.090    CXX test/cpp_headers/json.o
00:04:21.090    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:04:21.090    LINK blobcli
00:04:21.349    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:04:21.349    CXX test/cpp_headers/jsonrpc.o
00:04:21.607    CXX test/cpp_headers/keyring.o
00:04:21.607    CXX test/cpp_headers/keyring_module.o
00:04:21.607    LINK cpuset_ut
00:04:21.866    CXX test/cpp_headers/likely.o
00:04:21.866    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:04:21.866    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:04:21.866    LINK ioat_ut
00:04:21.866    CXX test/cpp_headers/log.o
00:04:21.866    LINK dma_ut
00:04:22.125    LINK crc16_ut
00:04:22.125    LINK bit_array_ut
00:04:22.125    LINK crc32_ieee_ut
00:04:22.125    LINK bdevperf
00:04:22.383    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:04:22.383    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:04:22.383    CC test/unit/lib/util/dif.c/dif_ut.o
00:04:22.383    CC test/unit/lib/util/file.c/file_ut.o
00:04:22.383    CXX test/cpp_headers/lvol.o
00:04:22.383    CC test/unit/lib/util/iov.c/iov_ut.o
00:04:22.383    LINK crc32c_ut
00:04:22.383    CC test/accel/dif/dif.o
00:04:22.642    CXX test/cpp_headers/md5.o
00:04:22.642    LINK file_ut
00:04:22.642    LINK crc64_ut
00:04:22.900    CC test/unit/lib/util/net.c/net_ut.o
00:04:22.900    CC test/unit/lib/util/math.c/math_ut.o
00:04:22.900    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:04:22.900    CC test/unit/lib/util/string.c/string_ut.o
00:04:23.159    LINK net_ut
00:04:23.159    CXX test/cpp_headers/memory.o
00:04:23.159    LINK math_ut
00:04:23.159    LINK iov_ut
00:04:23.159    CC test/blobfs/mkfs/mkfs.o
00:04:23.417    LINK string_ut
00:04:23.417    CXX test/cpp_headers/mmio.o
00:04:23.417    LINK mkfs
00:04:23.675    LINK dif
00:04:23.675    CC test/event/event_perf/event_perf.o
00:04:23.675    CC test/unit/lib/util/xor.c/xor_ut.o
00:04:23.675    CC test/nvme/aer/aer.o
00:04:23.675    CXX test/cpp_headers/nbd.o
00:04:23.675    CC test/lvol/esnap/esnap.o
00:04:23.675    CXX test/cpp_headers/net.o
00:04:23.675    LINK event_perf
00:04:23.675    LINK pipe_ut
00:04:23.934    CXX test/cpp_headers/notify.o
00:04:23.934    LINK dif_ut
00:04:23.934    CXX test/cpp_headers/nvme.o
00:04:23.934    LINK aer
00:04:24.192    CXX test/cpp_headers/nvme_intel.o
00:04:24.192    CXX test/cpp_headers/nvme_ocssd.o
00:04:24.192    CC examples/nvmf/nvmf/nvmf.o
00:04:24.192    LINK xor_ut
00:04:24.450    CC test/event/reactor/reactor.o
00:04:24.450    CXX test/cpp_headers/nvme_ocssd_spec.o
00:04:24.450    CC test/event/reactor_perf/reactor_perf.o
00:04:24.450    CC test/nvme/reset/reset.o
00:04:24.450    LINK reactor
00:04:24.450    CC test/unit/lib/util/fd_group.c/fd_group_ut.o
00:04:24.450    LINK reactor_perf
00:04:24.709    CXX test/cpp_headers/nvme_spec.o
00:04:24.709    LINK nvmf
00:04:24.709    LINK reset
00:04:24.709    CXX test/cpp_headers/nvme_zns.o
00:04:24.988    CC test/nvme/sgl/sgl.o
00:04:24.988    LINK fd_group_ut
00:04:24.988    CXX test/cpp_headers/nvmf.o
00:04:25.262    CC test/event/app_repeat/app_repeat.o
00:04:25.262    CC test/event/scheduler/scheduler.o
00:04:25.262    LINK sgl
00:04:25.262    CXX test/cpp_headers/nvmf_cmd.o
00:04:25.262    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:04:25.262    LINK app_repeat
00:04:25.528    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:04:25.528    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:04:25.528    CXX test/cpp_headers/nvmf_fc_spec.o
00:04:25.787    LINK scheduler
00:04:25.787    CC test/bdev/bdevio/bdevio.o
00:04:25.787    CXX test/cpp_headers/nvmf_spec.o
00:04:26.045    CC test/nvme/e2edp/nvme_dp.o
00:04:26.045    CXX test/cpp_headers/nvmf_transport.o
00:04:26.304    CC test/nvme/overhead/overhead.o
00:04:26.304    LINK pci_event_ut
00:04:26.304    LINK bdevio
00:04:26.304    CXX test/cpp_headers/opal.o
00:04:26.562    LINK nvme_dp
00:04:26.562    LINK idxd_user_ut
00:04:26.562    CC test/nvme/err_injection/err_injection.o
00:04:26.562    CXX test/cpp_headers/opal_spec.o
00:04:26.562    LINK overhead
00:04:26.820    CC test/nvme/startup/startup.o
00:04:26.820    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:04:26.820    CXX test/cpp_headers/pci_ids.o
00:04:26.820    LINK err_injection
00:04:26.820    LINK startup
00:04:27.079    CXX test/cpp_headers/pipe.o
00:04:27.079    CXX test/cpp_headers/queue.o
00:04:27.079    CXX test/cpp_headers/reduce.o
00:04:27.337    CXX test/cpp_headers/rpc.o
00:04:27.337    CXX test/cpp_headers/scheduler.o
00:04:27.595    CC test/nvme/reserve/reserve.o
00:04:27.595    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:04:27.595    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:04:27.595    CXX test/cpp_headers/scsi.o
00:04:27.595    CC test/nvme/simple_copy/simple_copy.o
00:04:27.853    LINK reserve
00:04:27.853    CC test/nvme/connect_stress/connect_stress.o
00:04:27.853    CXX test/cpp_headers/scsi_spec.o
00:04:28.112    CXX test/cpp_headers/sock.o
00:04:28.112    LINK simple_copy
00:04:28.112    LINK connect_stress
00:04:28.112    LINK idxd_ut
00:04:28.112    CC test/nvme/boot_partition/boot_partition.o
00:04:28.112    CXX test/cpp_headers/stdinc.o
00:04:28.112    LINK json_util_ut
00:04:28.370    CXX test/cpp_headers/string.o
00:04:28.370    LINK json_parse_ut
00:04:28.370    LINK boot_partition
00:04:28.370    CC test/nvme/compliance/nvme_compliance.o
00:04:28.370    LINK json_write_ut
00:04:28.629    CC test/nvme/fused_ordering/fused_ordering.o
00:04:28.629    CXX test/cpp_headers/thread.o
00:04:28.629    CC test/nvme/doorbell_aers/doorbell_aers.o
00:04:28.629    CC test/nvme/fdp/fdp.o
00:04:28.887    CXX test/cpp_headers/trace.o
00:04:28.887    LINK fused_ordering
00:04:28.887    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:04:28.887    CXX test/cpp_headers/trace_parser.o
00:04:28.887    LINK doorbell_aers
00:04:28.887    CC test/nvme/cuse/cuse.o
00:04:28.887    LINK nvme_compliance
00:04:28.887    CXX test/cpp_headers/tree.o
00:04:29.145    CXX test/cpp_headers/ublk.o
00:04:29.145    CXX test/cpp_headers/util.o
00:04:29.145    LINK fdp
00:04:29.145    CXX test/cpp_headers/uuid.o
00:04:29.145    CXX test/cpp_headers/version.o
00:04:29.403    CXX test/cpp_headers/vfio_user_pci.o
00:04:29.403    CXX test/cpp_headers/vfio_user_spec.o
00:04:29.403    LINK jsonrpc_server_ut
00:04:29.403    CXX test/cpp_headers/vhost.o
00:04:29.403    CXX test/cpp_headers/vmd.o
00:04:29.403    CXX test/cpp_headers/xor.o
00:04:29.662    CXX test/cpp_headers/zipf.o
00:04:29.662    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:04:30.597    LINK cuse
00:04:30.856    LINK rpc_ut
00:04:31.114    LINK esnap
00:04:31.373    CC test/unit/lib/sock/posix.c/posix_ut.o
00:04:31.373    CC test/unit/lib/sock/sock.c/sock_ut.o
00:04:31.373    CC test/unit/lib/thread/thread.c/thread_ut.o
00:04:31.373    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:04:31.373    CC test/unit/lib/notify/notify.c/notify_ut.o
00:04:31.373    CC test/unit/lib/keyring/keyring.c/keyring_ut.o
00:04:31.940    LINK keyring_ut
00:04:32.198    LINK notify_ut
00:04:33.134    LINK iobuf_ut
00:04:33.134    LINK posix_ut
00:04:33.701    LINK sock_ut
00:04:34.268    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:04:34.268    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:04:34.268    LINK thread_ut
00:04:34.526    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:04:35.902    LINK nvme_ns_ut
00:04:35.902    LINK nvme_ctrlr_ocssd_cmd_ut
00:04:35.902    LINK nvme_poll_group_ut
00:04:35.902    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:04:36.160    LINK nvme_ctrlr_cmd_ut
00:04:36.160    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:04:36.160    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:04:36.419    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:04:36.419    LINK nvme_qpair_ut
00:04:36.419    LINK nvme_ut
00:04:36.677    LINK nvme_ns_ocssd_cmd_ut
00:04:36.677    LINK nvme_quirks_ut
00:04:36.677    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:04:36.677    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:04:36.935    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:04:36.935    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:04:36.935    LINK nvme_ns_cmd_ut
00:04:37.192    LINK nvme_pcie_ut
00:04:37.192    CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:04:37.450    CC test/unit/lib/accel/accel.c/accel_ut.o
00:04:38.016    LINK nvme_transport_ut
00:04:38.275    LINK nvme_io_msg_ut
00:04:38.275    LINK nvme_opal_ut
00:04:38.534    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:04:38.534    LINK nvme_fabric_ut
00:04:38.534    CC test/unit/lib/blob/blob.c/blob_ut.o
00:04:38.534    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:04:38.792    CC test/unit/lib/init/rpc.c/rpc_ut.o
00:04:38.792    LINK nvme_ctrlr_ut
00:04:39.050    LINK nvme_pcie_common_ut
00:04:39.314    CC test/unit/lib/fsdev/fsdev.c/fsdev_ut.o
00:04:39.314    LINK rpc_ut
00:04:39.583    LINK blob_bdev_ut
00:04:39.841    LINK subsystem_ut
00:04:40.100    LINK nvme_cuse_ut
00:04:40.100    LINK nvme_tcp_ut
00:04:40.100    CC test/unit/lib/event/app.c/app_ut.o
00:04:40.100    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:04:40.357    LINK nvme_rdma_ut
00:04:40.924    LINK fsdev_ut
00:04:41.491    LINK accel_ut
00:04:41.491    LINK app_ut
00:04:41.750    LINK reactor_ut
00:04:42.009    CC test/unit/lib/bdev/part.c/part_ut.o
00:04:42.009    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:04:42.009    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:04:42.009    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:04:42.009    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:04:42.009    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:04:42.009    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:04:42.009    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:04:42.268    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:04:42.268    LINK scsi_nvme_ut
00:04:42.268    LINK bdev_zone_ut
00:04:42.526    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:04:42.526    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:04:42.785    LINK gpt_ut
00:04:43.043    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:04:43.610    LINK vbdev_zone_block_ut
00:04:43.610    LINK bdev_raid_sb_ut
00:04:43.868    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:04:43.868    CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o
00:04:43.868    LINK vbdev_lvol_ut
00:04:44.126    LINK concat_ut
00:04:45.062    LINK bdev_raid_ut
00:04:45.062    LINK raid1_ut
00:04:45.062    LINK raid0_ut
00:04:46.966    LINK part_ut
00:04:47.533    LINK bdev_ut
00:04:48.485    LINK blob_ut
00:04:48.743    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:04:48.743    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:04:48.743    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:04:48.743    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:04:48.743    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:04:49.002    LINK blobfs_bdev_ut
00:04:49.002    LINK tree_ut
00:04:49.569    LINK bdev_ut
00:04:49.827    LINK bdev_nvme_ut
00:04:50.394    CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:04:50.394    CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:04:50.394    CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o
00:04:50.394    CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:04:50.394    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:04:50.394    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:04:50.394    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:04:50.652    LINK blobfs_sync_ut
00:04:50.652    LINK blobfs_async_ut
00:04:51.271    LINK ftl_l2p_ut
00:04:51.271    LINK dev_ut
00:04:51.271    CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:04:51.271    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:04:51.271    CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:04:51.271    CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:04:51.529    LINK ftl_bitmap_ut
00:04:51.529    LINK ftl_io_ut
00:04:51.529    LINK lvol_ut
00:04:51.788    LINK lun_ut
00:04:51.788    CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:04:52.046    CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:04:52.046    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:04:52.046    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:04:52.046    LINK ftl_mempool_ut
00:04:52.046    LINK ftl_p2l_ut
00:04:52.305    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:04:52.305    LINK ftl_mngt_ut
00:04:52.563    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:04:52.563    LINK ftl_band_ut
00:04:52.563    LINK scsi_ut
00:04:52.821    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:04:53.080    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:04:53.080    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:04:53.647    LINK scsi_pr_ut
00:04:53.906    LINK scsi_bdev_ut
00:04:53.906    LINK ftl_sb_ut
00:04:53.906    LINK ftl_layout_upgrade_ut
00:04:53.906    CC test/unit/lib/nvmf/auth.c/auth_ut.o
00:04:54.164    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:04:54.164    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:04:54.164    LINK ctrlr_bdev_ut
00:04:54.422    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:04:54.422    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:04:55.357    LINK nvmf_ut
00:04:55.357    LINK init_grp_ut
00:04:55.357    LINK ctrlr_discovery_ut
00:04:55.357    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:04:55.357    CC test/unit/lib/iscsi/param.c/param_ut.o
00:04:55.616    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:04:55.616    LINK subsystem_ut
00:04:55.874    LINK ctrlr_ut
00:04:56.133    CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:04:56.133    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:04:56.133    LINK param_ut
00:04:56.391    LINK tcp_ut
00:04:56.391    LINK auth_ut
00:04:56.391    LINK conn_ut
00:04:57.326    LINK portal_grp_ut
00:04:57.584    LINK tgt_node_ut
00:04:58.519    LINK transport_ut
00:04:58.776    LINK iscsi_ut
00:04:58.776    LINK rdma_ut
00:04:59.712    LINK vhost_ut
00:04:59.971  
00:04:59.971  real	2m15.064s
00:04:59.971  user	10m25.572s
00:04:59.971  sys	2m1.160s
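(Editor's note on reading the timing summary: `real` is wall-clock time, `user` is CPU time summed across all cores, and `sys` is kernel time. `user` exceeding `real` simply means the build ran in parallel — 10m25.572s / 2m15.064s ≈ 4.6 cores kept busy on average.)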
00:04:59.971   05:50:20 unittest_build -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:04:59.971  ************************************
00:04:59.971  END TEST unittest_build
00:04:59.971  ************************************
00:04:59.971   05:50:20 unittest_build -- common/autotest_common.sh@10 -- $ set +x
00:04:59.971   05:50:20  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:04:59.971   05:50:20  -- pm/common@29 -- $ signal_monitor_resources TERM
00:04:59.971   05:50:20  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:04:59.971   05:50:20  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:59.971   05:50:20  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:04:59.971   05:50:20  -- pm/common@44 -- $ pid=2597
00:04:59.971   05:50:20  -- pm/common@50 -- $ kill -TERM 2597
00:04:59.971   05:50:20  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:04:59.971   05:50:20  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:04:59.971   05:50:20  -- pm/common@44 -- $ pid=2599
00:04:59.971   05:50:20  -- pm/common@50 -- $ kill -TERM 2599
00:04:59.971   05:50:20  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:04:59.971   05:50:20  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:04:59.971    05:50:20  -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:04:59.971     05:50:20  -- common/autotest_common.sh@1693 -- # lcov --version
00:04:59.971     05:50:20  -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:04:59.971    05:50:20  -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:04:59.971    05:50:20  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:59.971    05:50:20  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:59.971    05:50:20  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:59.971    05:50:20  -- scripts/common.sh@336 -- # IFS=.-:
00:04:59.971    05:50:20  -- scripts/common.sh@336 -- # read -ra ver1
00:04:59.971    05:50:20  -- scripts/common.sh@337 -- # IFS=.-:
00:04:59.971    05:50:20  -- scripts/common.sh@337 -- # read -ra ver2
00:04:59.971    05:50:20  -- scripts/common.sh@338 -- # local 'op=<'
00:04:59.971    05:50:20  -- scripts/common.sh@340 -- # ver1_l=2
00:04:59.971    05:50:20  -- scripts/common.sh@341 -- # ver2_l=1
00:04:59.971    05:50:20  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:59.971    05:50:20  -- scripts/common.sh@344 -- # case "$op" in
00:04:59.971    05:50:20  -- scripts/common.sh@345 -- # : 1
00:04:59.971    05:50:20  -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:59.972    05:50:20  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:59.972     05:50:20  -- scripts/common.sh@365 -- # decimal 1
00:05:00.232     05:50:20  -- scripts/common.sh@353 -- # local d=1
00:05:00.232     05:50:20  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:00.232     05:50:20  -- scripts/common.sh@355 -- # echo 1
00:05:00.232    05:50:20  -- scripts/common.sh@365 -- # ver1[v]=1
00:05:00.232     05:50:20  -- scripts/common.sh@366 -- # decimal 2
00:05:00.232     05:50:20  -- scripts/common.sh@353 -- # local d=2
00:05:00.232     05:50:20  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:00.232     05:50:20  -- scripts/common.sh@355 -- # echo 2
00:05:00.232    05:50:20  -- scripts/common.sh@366 -- # ver2[v]=2
00:05:00.232    05:50:20  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:00.232    05:50:20  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:00.232    05:50:20  -- scripts/common.sh@368 -- # return 0
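(Editor's sketch, not part of the log: the cmp_versions trace above splits both version strings on `.`, `-`, and `:` and compares them field by field until one side wins. A minimal standalone equivalent, under the hypothetical name ver_lt, would be:)

    ver_lt() {                                  # true if $1 < $2, field by field
        local -a a b
        IFS='.-:' read -ra a <<< "$1"
        IFS='.-:' read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first lower field decides
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                                # equal is not less-than
    }
    ver_lt 1.15 2 && echo "1.15 < 2"            # same verdict the trace reached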
00:05:00.232    05:50:20  -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:00.232    05:50:20  -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:00.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.232  		--rc genhtml_branch_coverage=1
00:05:00.232  		--rc genhtml_function_coverage=1
00:05:00.232  		--rc genhtml_legend=1
00:05:00.232  		--rc geninfo_all_blocks=1
00:05:00.232  		--rc geninfo_unexecuted_blocks=1
00:05:00.232  		
00:05:00.232  		'
00:05:00.232    05:50:20  -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:00.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.232  		--rc genhtml_branch_coverage=1
00:05:00.232  		--rc genhtml_function_coverage=1
00:05:00.232  		--rc genhtml_legend=1
00:05:00.232  		--rc geninfo_all_blocks=1
00:05:00.232  		--rc geninfo_unexecuted_blocks=1
00:05:00.232  		
00:05:00.232  		'
00:05:00.232    05:50:20  -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:00.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.232  		--rc genhtml_branch_coverage=1
00:05:00.232  		--rc genhtml_function_coverage=1
00:05:00.232  		--rc genhtml_legend=1
00:05:00.232  		--rc geninfo_all_blocks=1
00:05:00.232  		--rc geninfo_unexecuted_blocks=1
00:05:00.232  		
00:05:00.232  		'
00:05:00.232    05:50:20  -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:00.232  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.232  		--rc genhtml_branch_coverage=1
00:05:00.232  		--rc genhtml_function_coverage=1
00:05:00.232  		--rc genhtml_legend=1
00:05:00.232  		--rc geninfo_all_blocks=1
00:05:00.232  		--rc geninfo_unexecuted_blocks=1
00:05:00.232  		
00:05:00.232  		'
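(Editor's note: the `--rc lcov_branch_coverage=1` spelling is the lcov 1.x knob; lcov 2.x renamed these rc keys (e.g. to `branch_coverage`), which is presumably why the script version-gates on `lt 1.15 2` before choosing the flags. A hypothetical sketch of that choice, reusing the ver_lt sketch above:)

    # hypothetical: pick the rc-key spelling by lcov major version
    ver=$(lcov --version | awk '{print $NF}')   # "1.15" in this run
    if ver_lt "$ver" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    else
        LCOV_OPTS='--rc branch_coverage=1 --rc function_coverage=1'
    fi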
00:05:00.232   05:50:20  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:00.232     05:50:20  -- nvmf/common.sh@7 -- # uname -s
00:05:00.232    05:50:20  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:00.232    05:50:20  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:00.232    05:50:20  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:00.232    05:50:20  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:00.232    05:50:20  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:00.232    05:50:20  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:00.232    05:50:20  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:00.232    05:50:20  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:00.232    05:50:20  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:00.232     05:50:20  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:00.232    05:50:20  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bcc7b4b-21e3-45fa-8e12-040cf09ad907
00:05:00.232    05:50:20  -- nvmf/common.sh@18 -- # NVME_HOSTID=7bcc7b4b-21e3-45fa-8e12-040cf09ad907
00:05:00.232    05:50:20  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:00.232    05:50:20  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:00.232    05:50:20  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:00.232    05:50:20  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
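(Editor's note: `nvme gen-hostnqn` emits the spec-defined UUID form of a host NQN, nqn.2014-08.org.nvmexpress:uuid:<uuid>, and NVME_HOSTID reuses the same UUID, so the two identifiers passed via NVME_HOST stay consistent for later connect commands.)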
00:05:00.232    05:50:20  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:00.232     05:50:20  -- scripts/common.sh@15 -- # shopt -s extglob
00:05:00.232     05:50:20  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:00.232     05:50:20  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:00.232     05:50:20  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:00.232      05:50:20  -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:00.232      05:50:20  -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:00.232      05:50:20  -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:00.232      05:50:20  -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:00.232      05:50:20  -- paths/export.sh@6 -- # export PATH
00:05:00.232      05:50:20  -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:00.232    05:50:20  -- nvmf/common.sh@51 -- # : 0
00:05:00.232    05:50:20  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:00.232    05:50:20  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:00.232    05:50:20  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:00.232    05:50:20  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:00.232    05:50:20  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:00.232    05:50:20  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:00.232  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
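(Editor's note: the diagnostic above is real but benign — the test expanded to '[' '' -eq 1 ']', and `-eq` requires integer operands, so `[` prints the error and returns status 2, which the surrounding `if` simply treats as false. A defensive sketch, using the hypothetical variable name flag:)

    flag=''                        # empty, as in this run
    [ "$flag" -eq 1 ]              # -> "[: : integer expression expected"
    [ "${flag:-0}" -eq 1 ]         # defaulting to 0 keeps the test well-formed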
00:05:00.232    05:50:20  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:00.232    05:50:20  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:00.232    05:50:20  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:00.232   05:50:20  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:00.232    05:50:20  -- spdk/autotest.sh@32 -- # uname -s
00:05:00.232   05:50:20  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:00.232   05:50:20  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E'
00:05:00.232   05:50:20  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:00.232   05:50:20  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:05:00.232   05:50:20  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:00.232   05:50:20  -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:00.232    05:50:21  -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:00.232   05:50:21  -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm
00:05:00.232   05:50:21  -- spdk/autotest.sh@48 -- # udevadm_pid=72112
00:05:00.232   05:50:21  -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property
00:05:00.232   05:50:21  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:00.232   05:50:21  -- pm/common@17 -- # local monitor
00:05:00.232   05:50:21  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:00.232   05:50:21  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:00.232   05:50:21  -- pm/common@25 -- # sleep 1
00:05:00.232    05:50:21  -- pm/common@21 -- # date +%s
00:05:00.232    05:50:21  -- pm/common@21 -- # date +%s
00:05:00.232   05:50:21  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731909021
00:05:00.232   05:50:21  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731909021
00:05:00.232  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731909021_collect-vmstat.pm.log
00:05:00.232  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731909021_collect-cpu-load.pm.log
00:05:01.170   05:50:22  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:01.170   05:50:22  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:01.170   05:50:22  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:01.170   05:50:22  -- common/autotest_common.sh@10 -- # set +x
00:05:01.170   05:50:22  -- spdk/autotest.sh@59 -- # create_test_list
00:05:01.170   05:50:22  -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:01.170   05:50:22  -- common/autotest_common.sh@10 -- # set +x
00:05:01.170     05:50:22  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:05:01.170    05:50:22  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:05:01.170   05:50:22  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:05:01.170   05:50:22  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:05:01.170   05:50:22  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:05:01.170   05:50:22  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:01.170    05:50:22  -- common/autotest_common.sh@1457 -- # uname
00:05:01.429   05:50:22  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:01.429   05:50:22  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:01.429    05:50:22  -- common/autotest_common.sh@1477 -- # uname
00:05:01.429   05:50:22  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:01.429   05:50:22  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:01.429   05:50:22  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:01.429  lcov: LCOV version 1.15
00:05:01.429   05:50:22  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:07.998  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:07.998  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
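(Editor's note: nvme_stubs.c is a stub translation unit — "no functions found" means it contributes no executable lines, so gcov has nothing to report; the geninfo warning is harmless and the file is simply absent from the coverage baseline.)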
00:06:04.255   05:51:21  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:04.255   05:51:21  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:04.255   05:51:21  -- common/autotest_common.sh@10 -- # set +x
00:06:04.255   05:51:21  -- spdk/autotest.sh@78 -- # rm -f
00:06:04.255   05:51:21  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:04.256  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:06:04.256  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:06:04.256   05:51:21  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:04.256   05:51:21  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:04.256   05:51:21  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:04.256   05:51:21  -- common/autotest_common.sh@1658 -- # local nvme bdf
00:06:04.256   05:51:21  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:04.256   05:51:21  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:06:04.256   05:51:21  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:04.256   05:51:21  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:04.256   05:51:21  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:04.256   05:51:21  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:04.256   05:51:21  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:04.256   05:51:21  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:04.256   05:51:21  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:04.256   05:51:21  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:04.256   05:51:21  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:04.256  No valid GPT data, bailing
00:06:04.256    05:51:21  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:04.256   05:51:22  -- scripts/common.sh@394 -- # pt=
00:06:04.256   05:51:22  -- scripts/common.sh@395 -- # return 1
00:06:04.256   05:51:22  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:04.256  1+0 records in
00:06:04.256  1+0 records out
00:06:04.256  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456777 s, 230 MB/s
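(Editor's sketch, not part of the log: the sequence above is autotest's safe-to-wipe check — spdk-gpt.py finds no valid GPT, `blkid -s PTTYPE` prints nothing, so block_in_use returns 1 and the drive's first MiB is zeroed. Condensed into a hypothetical guard:)

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)   # empty -> no partition table
    if [[ -z "$pt" ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1     # scrub any stale metadata
    fi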
00:06:04.256   05:51:22  -- spdk/autotest.sh@105 -- # sync
00:06:04.256   05:51:22  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:04.256   05:51:22  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:04.256    05:51:22  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:04.256    05:51:23  -- spdk/autotest.sh@111 -- # uname -s
00:06:04.256   05:51:23  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:04.256   05:51:23  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:04.256   05:51:23  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:04.256  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:06:04.256  Hugepages
00:06:04.256  node     hugesize     free /  total
00:06:04.256  node0   1048576kB        0 /      0
00:06:04.256  node0      2048kB        0 /      0
00:06:04.256  
00:06:04.256  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:04.256  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:06:04.256  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
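(Editor's note on the status table: the virtio-blk disk at 0000:00:03.0 stays on its kernel driver because its partitions are mounted — hence the earlier "so not binding PCI dev" lines — while the QEMU NVMe controller at 0000:00:10.0 (1b36:0010) is the device the tests repeatedly claim and release, appearing as nvme0/nvme0n1.)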
00:06:04.256    05:51:24  -- spdk/autotest.sh@117 -- # uname -s
00:06:04.256   05:51:24  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:04.256   05:51:24  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:04.256   05:51:24  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:04.256  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:06:04.256  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:04.515   05:51:25  -- common/autotest_common.sh@1517 -- # sleep 1
00:06:05.453   05:51:26  -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:05.453   05:51:26  -- common/autotest_common.sh@1518 -- # local bdfs
00:06:05.453   05:51:26  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:05.453    05:51:26  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:05.453    05:51:26  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:05.453    05:51:26  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:05.453    05:51:26  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:05.453     05:51:26  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:05.453     05:51:26  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:05.713    05:51:26  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:05.713    05:51:26  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:06:05.713   05:51:26  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:05.972  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:06:05.972  Waiting for block devices as requested
00:06:05.972  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:05.972   05:51:26  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:05.972    05:51:26  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:05.972     05:51:26  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:06:05.972     05:51:26  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:06:05.972    05:51:26  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0
00:06:05.972    05:51:26  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]]
00:06:05.972     05:51:26  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0
00:06:05.972    05:51:26  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:05.972   05:51:26  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:05.972   05:51:26  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:05.972    05:51:26  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:05.972    05:51:26  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:05.972    05:51:26  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:05.972   05:51:26  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:05.972   05:51:26  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:05.972   05:51:26  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
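(Editor's note: the oacs check above is a bit test — `nvme id-ctrl` reported OACS 0x12a, and bit 3 (mask 0x8) is the Namespace Management/Attachment capability, so 0x12a & 0x8 = 8 and the [[ 8 -ne 0 ]] branch proceeds. In shell:)

    oacs=0x12a
    echo $(( oacs & 0x8 ))   # -> 8: the namespace-management bit is set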
00:06:05.972    05:51:26  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:05.972    05:51:26  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:05.972    05:51:26  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:05.972   05:51:26  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:05.972   05:51:26  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:05.972   05:51:26  -- common/autotest_common.sh@1543 -- # continue
00:06:05.973   05:51:26  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:05.973   05:51:26  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:05.973   05:51:26  -- common/autotest_common.sh@10 -- # set +x
00:06:06.231   05:51:26  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:06.231   05:51:26  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:06.231   05:51:26  -- common/autotest_common.sh@10 -- # set +x
00:06:06.231   05:51:26  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:06.490  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:06:06.490  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:07.059   05:51:28  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:07.059   05:51:28  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:07.059   05:51:28  -- common/autotest_common.sh@10 -- # set +x
00:06:07.319   05:51:28  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:07.319   05:51:28  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:07.319    05:51:28  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:07.319    05:51:28  -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:07.319    05:51:28  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:07.319    05:51:28  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:07.319    05:51:28  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:07.319     05:51:28  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:07.319     05:51:28  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:07.319     05:51:28  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:07.319     05:51:28  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:07.319      05:51:28  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:07.319      05:51:28  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:07.319     05:51:28  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:07.319     05:51:28  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:06:07.319    05:51:28  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:07.319     05:51:28  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:07.319    05:51:28  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:07.319    05:51:28  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:07.319    05:51:28  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:07.319    05:51:28  -- common/autotest_common.sh@1572 -- # return 0
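(Editor's note: get_nvme_bdfs_by_id 0x0a54 keeps only controllers whose PCI device ID matches its argument — 0x0a54 appears to be an Intel DC P4500/P4510-family ID whose Opal state the cleanup would revert. The emulated QEMU controller reports 0x0010, so the list stays empty and opal_revert_cleanup is a no-op. Hypothetical shape of the filter:)

    want=0x0a54; bdfs=()
    for bdf in "${_bdfs[@]}"; do                        # e.g. 0000:00:10.0
        dev=$(cat "/sys/bus/pci/devices/$bdf/device")   # -> 0x0010 here
        [[ $dev == "$want" ]] && bdfs+=("$bdf")
    done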
00:06:07.319   05:51:28  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:07.319   05:51:28  -- common/autotest_common.sh@1580 -- # return 0
00:06:07.319   05:51:28  -- spdk/autotest.sh@137 -- # '[' 1 -eq 1 ']'
00:06:07.319   05:51:28  -- spdk/autotest.sh@138 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:07.319   05:51:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:07.319   05:51:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:07.319   05:51:28  -- common/autotest_common.sh@10 -- # set +x
00:06:07.319  ************************************
00:06:07.319  START TEST unittest
00:06:07.319  ************************************
00:06:07.319   05:51:28 unittest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:07.319  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:07.319  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit
00:06:07.319  + testdir=/home/vagrant/spdk_repo/spdk/test/unit
00:06:07.319  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:07.319  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../..
00:06:07.319  + rootdir=/home/vagrant/spdk_repo/spdk
00:06:07.319  + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:06:07.319  ++ rpc_py=rpc_cmd
00:06:07.319  ++ set -e
00:06:07.319  ++ shopt -s nullglob
00:06:07.319  ++ shopt -s extglob
00:06:07.319  ++ shopt -s inherit_errexit
00:06:07.319  ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:06:07.319  ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:06:07.319  ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:06:07.319  +++ CONFIG_WPDK_DIR=
00:06:07.319  +++ CONFIG_ASAN=y
00:06:07.319  +++ CONFIG_VBDEV_COMPRESS=n
00:06:07.319  +++ CONFIG_HAVE_EXECINFO_H=y
00:06:07.319  +++ CONFIG_USDT=n
00:06:07.319  +++ CONFIG_CUSTOMOCF=n
00:06:07.319  +++ CONFIG_PREFIX=/usr/local
00:06:07.319  +++ CONFIG_RBD=n
00:06:07.319  +++ CONFIG_LIBDIR=
00:06:07.319  +++ CONFIG_IDXD=y
00:06:07.319  +++ CONFIG_NVME_CUSE=y
00:06:07.319  +++ CONFIG_SMA=n
00:06:07.319  +++ CONFIG_VTUNE=n
00:06:07.319  +++ CONFIG_TSAN=n
00:06:07.319  +++ CONFIG_RDMA_SEND_WITH_INVAL=y
00:06:07.319  +++ CONFIG_VFIO_USER_DIR=
00:06:07.319  +++ CONFIG_MAX_NUMA_NODES=1
00:06:07.319  +++ CONFIG_PGO_CAPTURE=n
00:06:07.319  +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:06:07.319  +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:07.319  +++ CONFIG_LTO=n
00:06:07.319  +++ CONFIG_ISCSI_INITIATOR=y
00:06:07.319  +++ CONFIG_CET=n
00:06:07.319  +++ CONFIG_VBDEV_COMPRESS_MLX5=n
00:06:07.319  +++ CONFIG_OCF_PATH=
00:06:07.319  +++ CONFIG_RDMA_SET_TOS=y
00:06:07.319  +++ CONFIG_AIO_FSDEV=y
00:06:07.319  +++ CONFIG_HAVE_ARC4RANDOM=y
00:06:07.319  +++ CONFIG_HAVE_LIBARCHIVE=n
00:06:07.319  +++ CONFIG_UBLK=y
00:06:07.319  +++ CONFIG_ISAL_CRYPTO=y
00:06:07.319  +++ CONFIG_OPENSSL_PATH=
00:06:07.319  +++ CONFIG_OCF=n
00:06:07.319  +++ CONFIG_FUSE=n
00:06:07.319  +++ CONFIG_VTUNE_DIR=
00:06:07.319  +++ CONFIG_FUZZER_LIB=
00:06:07.319  +++ CONFIG_FUZZER=n
00:06:07.319  +++ CONFIG_FSDEV=y
00:06:07.319  +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:06:07.319  +++ CONFIG_CRYPTO=n
00:06:07.319  +++ CONFIG_PGO_USE=n
00:06:07.319  +++ CONFIG_VHOST=y
00:06:07.319  +++ CONFIG_DAOS=n
00:06:07.319  +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:06:07.319  +++ CONFIG_DAOS_DIR=
00:06:07.319  +++ CONFIG_UNIT_TESTS=y
00:06:07.319  +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:06:07.319  +++ CONFIG_VIRTIO=y
00:06:07.319  +++ CONFIG_DPDK_UADK=n
00:06:07.319  +++ CONFIG_COVERAGE=y
00:06:07.319  +++ CONFIG_RDMA=y
00:06:07.319  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:06:07.319  +++ CONFIG_HAVE_LZ4=n
00:06:07.319  +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:06:07.319  +++ CONFIG_URING_PATH=
00:06:07.319  +++ CONFIG_XNVME=n
00:06:07.319  +++ CONFIG_VFIO_USER=n
00:06:07.319  +++ CONFIG_ARCH=native
00:06:07.319  +++ CONFIG_HAVE_EVP_MAC=y
00:06:07.319  +++ CONFIG_URING_ZNS=n
00:06:07.319  +++ CONFIG_WERROR=y
00:06:07.319  +++ CONFIG_HAVE_LIBBSD=n
00:06:07.319  +++ CONFIG_UBSAN=y
00:06:07.319  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:06:07.319  +++ CONFIG_IPSEC_MB_DIR=
00:06:07.319  +++ CONFIG_GOLANG=n
00:06:07.319  +++ CONFIG_ISAL=y
00:06:07.319  +++ CONFIG_IDXD_KERNEL=y
00:06:07.319  +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:06:07.319  +++ CONFIG_RDMA_PROV=verbs
00:06:07.319  +++ CONFIG_APPS=y
00:06:07.319  +++ CONFIG_SHARED=n
00:06:07.319  +++ CONFIG_HAVE_KEYUTILS=y
00:06:07.319  +++ CONFIG_FC_PATH=
00:06:07.319  +++ CONFIG_DPDK_PKG_CONFIG=n
00:06:07.319  +++ CONFIG_FC=n
00:06:07.319  +++ CONFIG_AVAHI=n
00:06:07.319  +++ CONFIG_FIO_PLUGIN=y
00:06:07.319  +++ CONFIG_RAID5F=n
00:06:07.319  +++ CONFIG_EXAMPLES=y
00:06:07.319  +++ CONFIG_TESTS=y
00:06:07.319  +++ CONFIG_CRYPTO_MLX5=n
00:06:07.319  +++ CONFIG_MAX_LCORES=128
00:06:07.319  +++ CONFIG_IPSEC_MB=n
00:06:07.319  +++ CONFIG_PGO_DIR=
00:06:07.319  +++ CONFIG_DEBUG=y
00:06:07.319  +++ CONFIG_DPDK_COMPRESSDEV=n
00:06:07.319  +++ CONFIG_CROSS_PREFIX=
00:06:07.319  +++ CONFIG_COPY_FILE_RANGE=y
00:06:07.319  +++ CONFIG_URING=n
00:06:07.319  ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:06:07.319  +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:06:07.319  ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:06:07.319  +++ _root=/home/vagrant/spdk_repo/spdk/test/common
00:06:07.319  +++ _root=/home/vagrant/spdk_repo/spdk
00:06:07.319  +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:06:07.319  +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:06:07.319  +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:06:07.319  +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:06:07.320  +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:06:07.320  +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:06:07.320  +++ VHOST_APP=("$_app_dir/vhost")
00:06:07.320  +++ DD_APP=("$_app_dir/spdk_dd")
00:06:07.320  +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:06:07.320  +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:06:07.320  +++ [[ #ifndef SPDK_CONFIG_H
00:06:07.320  #define SPDK_CONFIG_H
00:06:07.320  #define SPDK_CONFIG_AIO_FSDEV 1
00:06:07.320  #define SPDK_CONFIG_APPS 1
00:06:07.320  #define SPDK_CONFIG_ARCH native
00:06:07.320  #define SPDK_CONFIG_ASAN 1
00:06:07.320  #undef SPDK_CONFIG_AVAHI
00:06:07.320  #undef SPDK_CONFIG_CET
00:06:07.320  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:06:07.320  #define SPDK_CONFIG_COVERAGE 1
00:06:07.320  #define SPDK_CONFIG_CROSS_PREFIX 
00:06:07.320  #undef SPDK_CONFIG_CRYPTO
00:06:07.320  #undef SPDK_CONFIG_CRYPTO_MLX5
00:06:07.320  #undef SPDK_CONFIG_CUSTOMOCF
00:06:07.320  #undef SPDK_CONFIG_DAOS
00:06:07.320  #define SPDK_CONFIG_DAOS_DIR 
00:06:07.320  #define SPDK_CONFIG_DEBUG 1
00:06:07.320  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:06:07.320  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:06:07.320  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:06:07.320  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:06:07.320  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:06:07.320  #undef SPDK_CONFIG_DPDK_UADK
00:06:07.320  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:07.320  #define SPDK_CONFIG_EXAMPLES 1
00:06:07.320  #undef SPDK_CONFIG_FC
00:06:07.320  #define SPDK_CONFIG_FC_PATH 
00:06:07.320  #define SPDK_CONFIG_FIO_PLUGIN 1
00:06:07.320  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:06:07.320  #define SPDK_CONFIG_FSDEV 1
00:06:07.320  #undef SPDK_CONFIG_FUSE
00:06:07.320  #undef SPDK_CONFIG_FUZZER
00:06:07.320  #define SPDK_CONFIG_FUZZER_LIB 
00:06:07.320  #undef SPDK_CONFIG_GOLANG
00:06:07.320  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:06:07.320  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:06:07.320  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:06:07.320  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:06:07.320  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:06:07.320  #undef SPDK_CONFIG_HAVE_LIBBSD
00:06:07.320  #undef SPDK_CONFIG_HAVE_LZ4
00:06:07.320  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:06:07.320  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:06:07.320  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:06:07.320  #define SPDK_CONFIG_IDXD 1
00:06:07.320  #define SPDK_CONFIG_IDXD_KERNEL 1
00:06:07.320  #undef SPDK_CONFIG_IPSEC_MB
00:06:07.320  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:06:07.320  #define SPDK_CONFIG_ISAL 1
00:06:07.320  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:06:07.320  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:06:07.320  #define SPDK_CONFIG_LIBDIR 
00:06:07.320  #undef SPDK_CONFIG_LTO
00:06:07.320  #define SPDK_CONFIG_MAX_LCORES 128
00:06:07.320  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:06:07.320  #define SPDK_CONFIG_NVME_CUSE 1
00:06:07.320  #undef SPDK_CONFIG_OCF
00:06:07.320  #define SPDK_CONFIG_OCF_PATH 
00:06:07.320  #define SPDK_CONFIG_OPENSSL_PATH 
00:06:07.320  #undef SPDK_CONFIG_PGO_CAPTURE
00:06:07.320  #define SPDK_CONFIG_PGO_DIR 
00:06:07.320  #undef SPDK_CONFIG_PGO_USE
00:06:07.320  #define SPDK_CONFIG_PREFIX /usr/local
00:06:07.320  #undef SPDK_CONFIG_RAID5F
00:06:07.320  #undef SPDK_CONFIG_RBD
00:06:07.320  #define SPDK_CONFIG_RDMA 1
00:06:07.320  #define SPDK_CONFIG_RDMA_PROV verbs
00:06:07.320  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:06:07.320  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:06:07.320  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:06:07.320  #undef SPDK_CONFIG_SHARED
00:06:07.320  #undef SPDK_CONFIG_SMA
00:06:07.320  #define SPDK_CONFIG_TESTS 1
00:06:07.320  #undef SPDK_CONFIG_TSAN
00:06:07.320  #define SPDK_CONFIG_UBLK 1
00:06:07.320  #define SPDK_CONFIG_UBSAN 1
00:06:07.320  #define SPDK_CONFIG_UNIT_TESTS 1
00:06:07.320  #undef SPDK_CONFIG_URING
00:06:07.320  #define SPDK_CONFIG_URING_PATH 
00:06:07.320  #undef SPDK_CONFIG_URING_ZNS
00:06:07.320  #undef SPDK_CONFIG_USDT
00:06:07.320  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:06:07.320  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:06:07.320  #undef SPDK_CONFIG_VFIO_USER
00:06:07.320  #define SPDK_CONFIG_VFIO_USER_DIR 
00:06:07.320  #define SPDK_CONFIG_VHOST 1
00:06:07.320  #define SPDK_CONFIG_VIRTIO 1
00:06:07.320  #undef SPDK_CONFIG_VTUNE
00:06:07.320  #define SPDK_CONFIG_VTUNE_DIR 
00:06:07.320  #define SPDK_CONFIG_WERROR 1
00:06:07.320  #define SPDK_CONFIG_WPDK_DIR 
00:06:07.320  #undef SPDK_CONFIG_XNVME
00:06:07.320  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:06:07.320  +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
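The long bracketed test above compares the full text of the installed config.h against a glob for '#define SPDK_CONFIG_DEBUG': applications.sh verifies the binaries were built with debug support before enabling any debug-only app handling. A standalone sketch of that check, assuming the same path:

    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    # $(<file) reads the whole file; the glob match mirrors the trace above.
    if [[ -e $config_h && $(<"$config_h") == *'#define SPDK_CONFIG_DEBUG'* ]]; then
        echo 'debug build detected'
    fi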
00:06:07.320  ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:07.320  +++ shopt -s extglob
00:06:07.320  +++ [[ -e /bin/wpdk_common.sh ]]
00:06:07.320  +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:07.320  +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:07.320  ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:07.320  ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:07.320  ++++ PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:07.320  ++++ PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:07.320  ++++ export PATH
00:06:07.320  ++++ echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
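Each re-source of export.sh prepends its tool directories again, which is why the PATH lines above keep growing with duplicate entries. A hypothetical dedupe helper (not part of the harness) that would keep PATH idempotent:

    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already present, do nothing
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH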
00:06:07.320  ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:06:07.320  +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:06:07.320  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:06:07.320  +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:06:07.320  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:06:07.320  +++ _pmrootdir=/home/vagrant/spdk_repo/spdk
00:06:07.320  +++ TEST_TAG=N/A
00:06:07.320  +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:06:07.320  +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:06:07.320  ++++ uname -s
00:06:07.320  +++ PM_OS=Linux
00:06:07.320  +++ MONITOR_RESOURCES_SUDO=()
00:06:07.320  +++ declare -A MONITOR_RESOURCES_SUDO
00:06:07.320  +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:06:07.320  +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:06:07.320  +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:06:07.320  +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:06:07.320  +++ SUDO[0]=
00:06:07.320  +++ SUDO[1]='sudo -E'
00:06:07.320  +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:06:07.320  +++ [[ Linux == FreeBSD ]]
00:06:07.320  +++ [[ Linux == Linux ]]
00:06:07.320  +++ [[ QEMU != QEMU ]]
00:06:07.320  +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
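pm/common wires each resource collector to a sudo requirement: MONITOR_RESOURCES_SUDO maps the collector name to 0 or 1, and that value indexes SUDO, which expands to either an empty string or 'sudo -E'. A self-contained sketch of the lookup:

    declare -A MONITOR_RESOURCES_SUDO=([collect-bmc-pm]=1 [collect-cpu-load]=0
                                       [collect-cpu-temp]=0 [collect-vmstat]=0)
    SUDO[0]=''
    SUDO[1]='sudo -E'
    for mon in "${!MONITOR_RESOURCES_SUDO[@]}"; do
        # prints each collector prefixed by '' or 'sudo -E' as appropriate
        echo "${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon"
    done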
00:06:07.320  ++ : 1
00:06:07.320  ++ export RUN_NIGHTLY
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_AUTOTEST_DEBUG_APPS
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_RUN_VALGRIND
00:06:07.320  ++ : 1
00:06:07.320  ++ export SPDK_RUN_FUNCTIONAL_TEST
00:06:07.320  ++ : 1
00:06:07.320  ++ export SPDK_TEST_UNITTEST
00:06:07.320  ++ :
00:06:07.320  ++ export SPDK_TEST_AUTOBUILD
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_RELEASE_BUILD
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_ISAL
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_ISCSI
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_ISCSI_INITIATOR
00:06:07.320  ++ : 1
00:06:07.320  ++ export SPDK_TEST_NVME
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_NVME_PMR
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_NVME_BP
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_NVME_CLI
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_NVME_CUSE
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_NVME_FDP
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_NVMF
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_VFIOUSER
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_VFIOUSER_QEMU
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_FUZZER
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_FUZZER_SHORT
00:06:07.320  ++ : rdma
00:06:07.320  ++ export SPDK_TEST_NVMF_TRANSPORT
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_RBD
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_VHOST
00:06:07.320  ++ : 1
00:06:07.320  ++ export SPDK_TEST_BLOCKDEV
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_RAID
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_IOAT
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_BLOBFS
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_VHOST_INIT
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_LVOL
00:06:07.320  ++ : 0
00:06:07.320  ++ export SPDK_TEST_VBDEV_COMPRESS
00:06:07.320  ++ : 1
00:06:07.320  ++ export SPDK_RUN_ASAN
00:06:07.320  ++ : 1
00:06:07.320  ++ export SPDK_RUN_UBSAN
00:06:07.320  ++ : /home/vagrant/spdk_repo/dpdk/build
00:06:07.321  ++ export SPDK_RUN_EXTERNAL_DPDK
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_RUN_NON_ROOT
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_CRYPTO
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_FTL
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_OCF
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_VMD
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_OPAL
00:06:07.321  ++ : v22.11.4
00:06:07.321  ++ export SPDK_TEST_NATIVE_DPDK
00:06:07.321  ++ : true
00:06:07.321  ++ export SPDK_AUTOTEST_X
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_URING
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_USDT
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_USE_IGB_UIO
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_SCHEDULER
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_SCANBUILD
00:06:07.321  ++ :
00:06:07.321  ++ export SPDK_TEST_NVMF_NICS
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_SMA
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_DAOS
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_XNVME
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_ACCEL
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_ACCEL_DSA
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_ACCEL_IAA
00:06:07.321  ++ :
00:06:07.321  ++ export SPDK_TEST_FUZZER_TARGET
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_NVMF_MDNS
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_JSONRPC_GO_CLIENT
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_SETUP
00:06:07.321  ++ : 0
00:06:07.321  ++ export SPDK_TEST_NVME_INTERRUPT
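The alternating ': value' / 'export VAR' pairs above are the xtrace signature of bash's default-assignment idiom: ':' is a no-op builtin whose argument expansion assigns the default only when the variable is unset or empty. A two-line reproduction:

    : "${SPDK_TEST_NVME:=0}"   # traces as ': 0' when the default applies
    export SPDK_TEST_NVME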
00:06:07.321  ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:06:07.321  ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:06:07.321  ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:06:07.321  ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:06:07.321  ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:07.321  ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:07.321  ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:07.321  ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:07.321  ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:06:07.321  ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:06:07.321  ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:07.321  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:07.321  ++ export PYTHONDONTWRITEBYTECODE=1
00:06:07.321  ++ PYTHONDONTWRITEBYTECODE=1
00:06:07.321  ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:06:07.321  ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:06:07.321  ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:06:07.321  ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
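ASAN_OPTIONS and UBSAN_OPTIONS are colon-separated key=value lists read by the sanitizer runtimes at process start, so the same strings work for any instrumented binary; the binary name below is illustrative:

    export ASAN_OPTIONS=new_delete_type_mismatch=0:abort_on_error=1
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:exitcode=134
    ./histogram_ut             # hypothetical sanitizer-instrumented test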
00:06:07.321  ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:06:07.321  ++ rm -rf /var/tmp/asan_suppression_file
00:06:07.321  ++ cat
00:06:07.321  ++ echo leak:libfuse3.so
00:06:07.321  ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:06:07.321  ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
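xtrace shows the commands ('rm', 'cat', 'echo') but not the redirected text, so the suppression-file construction above has to be reconstructed; a plausible equivalent that yields the same LSAN_OPTIONS is:

    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo 'leak:libfuse3.so' >> "$supp"   # silence known libfuse3 leak reports
    export LSAN_OPTIONS=suppressions=$supp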
00:06:07.321  ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:06:07.321  ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:06:07.321  ++ '[' -z /var/spdk/dependencies ']'
00:06:07.321  ++ export DEPENDENCY_DIR
00:06:07.321  ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:06:07.321  ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:06:07.321  ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:06:07.321  ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:06:07.321  ++ export QEMU_BIN=
00:06:07.321  ++ QEMU_BIN=
00:06:07.321  ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:06:07.321  ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:06:07.321  ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:06:07.321  ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:06:07.321  ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:07.321  ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:07.321  ++ _LCOV_MAIN=0
00:06:07.321  ++ _LCOV_LLVM=1
00:06:07.321  ++ _LCOV=
00:06:07.321  ++ [[ '' == *clang* ]]
00:06:07.321  ++ [[ 0 -eq 1 ]]
00:06:07.321  ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:06:07.321  ++ _lcov_opt[_LCOV_MAIN]=
00:06:07.321  ++ lcov_opt=
00:06:07.321  ++ '[' 0 -eq 0 ']'
00:06:07.321  ++ export valgrind=
00:06:07.321  ++ valgrind=
00:06:07.321  +++ uname -s
00:06:07.321  ++ '[' Linux = Linux ']'
00:06:07.321  ++ HUGEMEM=4096
00:06:07.321  ++ export CLEAR_HUGE=yes
00:06:07.321  ++ CLEAR_HUGE=yes
00:06:07.321  ++ MAKE=make
00:06:07.321  +++ nproc
00:06:07.321  ++ MAKEFLAGS=-j10
00:06:07.321  ++ export HUGEMEM=4096
00:06:07.321  ++ HUGEMEM=4096
00:06:07.321  ++ NO_HUGE=()
00:06:07.321  ++ TEST_MODE=
00:06:07.321  ++ [[ -z '' ]]
00:06:07.321  ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:06:07.321  ++ exec
00:06:07.321  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:06:07.321  ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server
00:06:07.321  ++ set_test_storage 2147483648
00:06:07.321  ++ [[ -v testdir ]]
00:06:07.321  ++ local requested_size=2147483648
00:06:07.321  ++ local mount target_dir
00:06:07.321  ++ local -A mounts fss sizes avails uses
00:06:07.321  ++ local source fs size avail mount use
00:06:07.321  ++ local storage_fallback storage_candidates
00:06:07.321  +++ mktemp -udt spdk.XXXXXX
00:06:07.321  ++ storage_fallback=/tmp/spdk.4VUP0J
00:06:07.321  ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:06:07.321  ++ [[ -n '' ]]
00:06:07.321  ++ [[ -n '' ]]
00:06:07.321  ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.4VUP0J/tests/unit /tmp/spdk.4VUP0J
00:06:07.321  ++ requested_size=2214592512
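'mktemp -u' only generates a unique name (no file is created), which set_test_storage uses as its fallback candidate tree; note also that requested_size grows from 2147483648 to 2214592512, a 64 MiB safety margin on top of the 2 GiB request. A sketch of the setup:

    testdir=/home/vagrant/spdk_repo/spdk/test/unit    # directory under test
    requested_size=2147483648                         # 2 GiB
    storage_fallback=$(mktemp -udt spdk.XXXXXX)       # name only: -u = dry run
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}"
                        "$storage_fallback")
    mkdir -p "${storage_candidates[@]}"
    (( requested_size += 64 * 1024 * 1024 ))          # pad by 64 MiB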
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  +++ df -T
00:06:07.321  +++ grep -v Filesystem
00:06:07.321  ++ mounts["$mount"]=tmpfs
00:06:07.321  ++ fss["$mount"]=tmpfs
00:06:07.321  ++ avails["$mount"]=1252945920
00:06:07.321  ++ sizes["$mount"]=1254023168
00:06:07.321  ++ uses["$mount"]=1077248
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ mounts["$mount"]=/dev/vda1
00:06:07.321  ++ fss["$mount"]=ext4
00:06:07.321  ++ avails["$mount"]=8913395712
00:06:07.321  ++ sizes["$mount"]=19681529856
00:06:07.321  ++ uses["$mount"]=10751356928
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ mounts["$mount"]=tmpfs
00:06:07.321  ++ fss["$mount"]=tmpfs
00:06:07.321  ++ avails["$mount"]=6270115840
00:06:07.321  ++ sizes["$mount"]=6270115840
00:06:07.321  ++ uses["$mount"]=0
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ mounts["$mount"]=tmpfs
00:06:07.321  ++ fss["$mount"]=tmpfs
00:06:07.321  ++ avails["$mount"]=5242880
00:06:07.321  ++ sizes["$mount"]=5242880
00:06:07.321  ++ uses["$mount"]=0
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ mounts["$mount"]=/dev/vda16
00:06:07.321  ++ fss["$mount"]=ext4
00:06:07.321  ++ avails["$mount"]=777306112
00:06:07.321  ++ sizes["$mount"]=923156480
00:06:07.321  ++ uses["$mount"]=81207296
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ mounts["$mount"]=/dev/vda15
00:06:07.321  ++ fss["$mount"]=vfat
00:06:07.321  ++ avails["$mount"]=103000064
00:06:07.321  ++ sizes["$mount"]=109395968
00:06:07.321  ++ uses["$mount"]=6395904
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ mounts["$mount"]=tmpfs
00:06:07.321  ++ fss["$mount"]=tmpfs
00:06:07.321  ++ avails["$mount"]=1254010880
00:06:07.321  ++ sizes["$mount"]=1254023168
00:06:07.321  ++ uses["$mount"]=12288
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output
00:06:07.321  ++ fss["$mount"]=fuse.sshfs
00:06:07.321  ++ avails["$mount"]=97253007360
00:06:07.321  ++ sizes["$mount"]=105088212992
00:06:07.321  ++ uses["$mount"]=2449772544
00:06:07.321  ++ read -r source fs size use avail _ mount
00:06:07.321  ++ printf '* Looking for test storage...\n'
00:06:07.321  * Looking for test storage...
00:06:07.321  ++ local target_space new_size
00:06:07.321  ++ for target_dir in "${storage_candidates[@]}"
00:06:07.321  +++ df /home/vagrant/spdk_repo/spdk/test/unit
00:06:07.321  +++ awk '$1 !~ /Filesystem/{print $6}'
00:06:07.321  ++ mount=/
00:06:07.321  ++ target_space=8913395712
00:06:07.321  ++ (( target_space == 0 || target_space < requested_size ))
00:06:07.321  ++ (( target_space >= requested_size ))
00:06:07.321  ++ [[ ext4 == tmpfs ]]
00:06:07.321  ++ [[ ext4 == ramfs ]]
00:06:07.321  ++ [[ / == / ]]
00:06:07.321  ++ new_size=12965949440
00:06:07.321  ++ (( new_size * 100 / sizes[/] > 95 ))
00:06:07.321  ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:06:07.322  ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:06:07.322  ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit
00:06:07.322  * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit
00:06:07.322  ++ return 0
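Putting the pieces together: the loop above parses 'df -T' into associative arrays keyed by mount point, then walks the candidate directories and accepts the first one whose backing filesystem has enough free space. A condensed, self-contained sketch of the same logic (units follow whatever df reports on the host):

    declare -A avails fss
    while read -r source fs size use avail _ mount; do
        fss[$mount]=$fs
        avails[$mount]=$avail
    done < <(df -T | grep -v Filesystem)
    requested_size=2214592512
    storage_candidates=("$PWD" /tmp)   # illustrative candidates
    for target_dir in "${storage_candidates[@]}"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( avails[$mount] >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done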
00:06:07.322  ++ set -o errtrace
00:06:07.322  ++ shopt -s extdebug
00:06:07.322  ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:06:07.322  ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
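These four lines change the shape of the rest of the log: errtrace lets the ERR trap fire inside functions, extdebug enables the backtrace machinery, and PS4 is re-expanded for every traced command, so '\t' stamps each line with the wall clock and ${BASH_SOURCE}@${LINENO} names the script location, which is exactly the ' 05:51:28 unittest -- ...' prefix seen below. A standalone reproduction:

    set -o errtrace
    shopt -s extdebug
    trap 'trap - ERR; echo "backtrace placeholder" >&2' ERR
    PS4=' \t ${BASH_SOURCE##*/}@${LINENO} -- \$ '
    set -x
    echo hello    # traced as: HH:MM:SS script@6 -- $ echo hello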
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@1685 -- # true
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@1687 -- # xtrace_fd
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@29 -- # exec
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@31 -- # xtrace_restore
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@18 -- # set -x
00:06:07.322    05:51:28 unittest -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:07.322     05:51:28 unittest -- common/autotest_common.sh@1693 -- # lcov --version
00:06:07.322     05:51:28 unittest -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:07.579    05:51:28 unittest -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:07.579    05:51:28 unittest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:07.579    05:51:28 unittest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:07.579    05:51:28 unittest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:07.579    05:51:28 unittest -- scripts/common.sh@336 -- # IFS=.-:
00:06:07.579    05:51:28 unittest -- scripts/common.sh@336 -- # read -ra ver1
00:06:07.579    05:51:28 unittest -- scripts/common.sh@337 -- # IFS=.-:
00:06:07.579    05:51:28 unittest -- scripts/common.sh@337 -- # read -ra ver2
00:06:07.579    05:51:28 unittest -- scripts/common.sh@338 -- # local 'op=<'
00:06:07.579    05:51:28 unittest -- scripts/common.sh@340 -- # ver1_l=2
00:06:07.579    05:51:28 unittest -- scripts/common.sh@341 -- # ver2_l=1
00:06:07.579    05:51:28 unittest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:07.579    05:51:28 unittest -- scripts/common.sh@344 -- # case "$op" in
00:06:07.579    05:51:28 unittest -- scripts/common.sh@345 -- # : 1
00:06:07.579    05:51:28 unittest -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:07.579    05:51:28 unittest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:07.579     05:51:28 unittest -- scripts/common.sh@365 -- # decimal 1
00:06:07.579     05:51:28 unittest -- scripts/common.sh@353 -- # local d=1
00:06:07.579     05:51:28 unittest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:07.579     05:51:28 unittest -- scripts/common.sh@355 -- # echo 1
00:06:07.579    05:51:28 unittest -- scripts/common.sh@365 -- # ver1[v]=1
00:06:07.579     05:51:28 unittest -- scripts/common.sh@366 -- # decimal 2
00:06:07.579     05:51:28 unittest -- scripts/common.sh@353 -- # local d=2
00:06:07.579     05:51:28 unittest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:07.579     05:51:28 unittest -- scripts/common.sh@355 -- # echo 2
00:06:07.579    05:51:28 unittest -- scripts/common.sh@366 -- # ver2[v]=2
00:06:07.579    05:51:28 unittest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:07.579    05:51:28 unittest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:07.579    05:51:28 unittest -- scripts/common.sh@368 -- # return 0
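The cmp_versions walk above splits both version strings on '.', '-' and ':' and compares them component-wise as integers: 'lt 1.15 2' returns 0 because 1 < 2 in the first component, so the lcov on this box predates 2.x. A compact restatement of the algorithm:

    lt() {
        local -a a b
        local IFS=.-: v
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for (( v = 0; v < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); v++ )); do
            (( 10#${a[v]:-0} < 10#${b[v]:-0} )) && return 0   # strictly older
            (( 10#${a[v]:-0} > 10#${b[v]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo 'old lcov: keep the --rc coverage flags'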
00:06:07.579    05:51:28 unittest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:07.579    05:51:28 unittest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:07.579  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.579  		--rc genhtml_branch_coverage=1
00:06:07.579  		--rc genhtml_function_coverage=1
00:06:07.579  		--rc genhtml_legend=1
00:06:07.579  		--rc geninfo_all_blocks=1
00:06:07.579  		--rc geninfo_unexecuted_blocks=1
00:06:07.579  		
00:06:07.579  		'
00:06:07.579    05:51:28 unittest -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:07.579  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.579  		--rc genhtml_branch_coverage=1
00:06:07.579  		--rc genhtml_function_coverage=1
00:06:07.579  		--rc genhtml_legend=1
00:06:07.579  		--rc geninfo_all_blocks=1
00:06:07.579  		--rc geninfo_unexecuted_blocks=1
00:06:07.579  		
00:06:07.579  		'
00:06:07.579    05:51:28 unittest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:07.579  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.579  		--rc genhtml_branch_coverage=1
00:06:07.579  		--rc genhtml_function_coverage=1
00:06:07.579  		--rc genhtml_legend=1
00:06:07.579  		--rc geninfo_all_blocks=1
00:06:07.579  		--rc geninfo_unexecuted_blocks=1
00:06:07.579  		
00:06:07.579  		'
00:06:07.579    05:51:28 unittest -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:07.579  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:07.579  		--rc genhtml_branch_coverage=1
00:06:07.579  		--rc genhtml_function_coverage=1
00:06:07.579  		--rc genhtml_legend=1
00:06:07.579  		--rc geninfo_all_blocks=1
00:06:07.579  		--rc geninfo_unexecuted_blocks=1
00:06:07.579  		
00:06:07.579  		'
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@159 -- # '[' 0 -eq 1 ']'
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@166 -- # '[' -z x ']'
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@173 -- # '[' 0 -eq 1 ']'
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@182 -- # [[ y == y ]]
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@183 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@184 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:06:07.579   05:51:28 unittest -- unit/unittest.sh@186 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:06:14.168  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:14.168  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
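The '-i' capture above records a zero-count baseline for every instrumented file before any test runs; merged with post-test data, that keeps never-executed files visible in the final report (the geninfo warning simply flags a stub object containing no functions). The general pattern, as a hedged sketch with illustrative file names:

    lcov -q -c --no-external -i -d . -t Baseline -o base.info   # zero baseline
    # ... run the unit tests ...
    lcov -q -c --no-external -d . -t Tests -o test.info         # real counts
    lcov -a base.info -a test.info -o total.info                # merge both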
00:07:10.386    05:52:28 unittest -- unit/unittest.sh@190 -- # uname -m
00:07:10.386   05:52:28 unittest -- unit/unittest.sh@190 -- # '[' x86_64 = aarch64 ']'
00:07:10.386   05:52:28 unittest -- unit/unittest.sh@194 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:10.386  ************************************
00:07:10.386  START TEST unittest_pci_event
00:07:10.386  ************************************
00:07:10.386   05:52:28 unittest.unittest_pci_event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:07:10.386  
00:07:10.386  
00:07:10.386       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.386       http://cunit.sourceforge.net/
00:07:10.386  
00:07:10.386  
00:07:10.386  Suite: pci_event
00:07:10.386    Test: test_pci_parse_event ...[2024-11-18 05:52:28.592702] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000
00:07:10.386  passed
00:07:10.386  
00:07:10.386  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.386                suites      1      1    n/a      0        0
00:07:10.386                 tests      1      1      1      0        0
00:07:10.386               asserts     15     15     15      0      n/a
00:07:10.386  
00:07:10.386  Elapsed time =    0.001 seconds
00:07:10.386  [2024-11-18 05:52:28.593154] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000
00:07:10.386  
00:07:10.386  real	0m0.042s
00:07:10.386  user	0m0.018s
00:07:10.386  sys	0m0.018s
00:07:10.386   05:52:28 unittest.unittest_pci_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.386   05:52:28 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x
00:07:10.386  ************************************
00:07:10.386  END TEST unittest_pci_event
00:07:10.386  ************************************
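The START/END banners and the real/user/sys trio come from a run_test-style wrapper that times each test binary; SPDK's actual wrapper also manages xtrace state (the xtrace_disable calls above), but the core shape is roughly:

    run_test() {    # illustrative sketch, not the harness's exact function
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    run_test unittest_pci_event ./pci_event_ut   # hypothetical invocation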
00:07:10.386   05:52:28 unittest -- unit/unittest.sh@195 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:10.386  ************************************
00:07:10.386  START TEST unittest_include
00:07:10.386  ************************************
00:07:10.386   05:52:28 unittest.unittest_include -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:07:10.386  
00:07:10.386  
00:07:10.386       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.386       http://cunit.sourceforge.net/
00:07:10.386  
00:07:10.386  
00:07:10.386  Suite: histogram
00:07:10.386    Test: histogram_test ...passed
00:07:10.386    Test: histogram_merge ...passed
00:07:10.386  
00:07:10.386  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.386                suites      1      1    n/a      0        0
00:07:10.386                 tests      2      2      2      0        0
00:07:10.386               asserts     50     50     50      0      n/a
00:07:10.386  
00:07:10.386  Elapsed time =    0.005 seconds
00:07:10.386  
00:07:10.386  real	0m0.029s
00:07:10.386  user	0m0.013s
00:07:10.386  sys	0m0.016s
00:07:10.386   05:52:28 unittest.unittest_include -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.386   05:52:28 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x
00:07:10.386  ************************************
00:07:10.386  END TEST unittest_include
00:07:10.386  ************************************
00:07:10.386   05:52:28 unittest -- unit/unittest.sh@196 -- # run_test unittest_bdev unittest_bdev
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.386   05:52:28 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:10.386  ************************************
00:07:10.386  START TEST unittest_bdev
00:07:10.386  ************************************
00:07:10.386   05:52:28 unittest.unittest_bdev -- common/autotest_common.sh@1129 -- # unittest_bdev
00:07:10.386   05:52:28 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut
00:07:10.386  
00:07:10.386  
00:07:10.386       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.386       http://cunit.sourceforge.net/
00:07:10.386  
00:07:10.386  
00:07:10.386  Suite: bdev
00:07:10.386    Test: bytes_to_blocks_test ...passed
00:07:10.386    Test: num_blocks_test ...passed
00:07:10.386    Test: io_valid_test ...passed
00:07:10.386    Test: open_write_test ...[2024-11-18 05:52:28.793919] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8198:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut
00:07:10.386  [2024-11-18 05:52:28.794199] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8198:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut
00:07:10.386  [2024-11-18 05:52:28.794275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8198:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut
00:07:10.386  passed
00:07:10.386    Test: claim_test ...passed
00:07:10.386    Test: alias_add_del_test ...[2024-11-18 05:52:28.836560] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4691:bdev_name_add: *ERROR*: Bdev name bdev0 already exists
00:07:10.386  [2024-11-18 05:52:28.836645] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4721:spdk_bdev_alias_add: *ERROR*: Empty alias passed
00:07:10.386  [2024-11-18 05:52:28.836676] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4691:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists
00:07:10.386  passed
00:07:10.386    Test: get_device_stat_test ...passed
00:07:10.387    Test: bdev_io_types_test ...passed
00:07:10.387    Test: bdev_io_wait_test ...passed
00:07:10.387    Test: bdev_io_spans_split_test ...passed
00:07:10.387    Test: bdev_io_boundary_split_test ...passed
00:07:10.387    Test: bdev_io_max_size_and_segment_split_test ...[2024-11-18 05:52:28.935042] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3273:_bdev_rw_split: *ERROR*: The first child io was less than a block size
00:07:10.387  passed
00:07:10.387    Test: bdev_io_mix_split_test ...passed
00:07:10.387    Test: bdev_io_split_with_io_wait ...passed
00:07:10.387    Test: bdev_io_write_unit_split_test ...[2024-11-18 05:52:29.036853] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2816:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:07:10.387  [2024-11-18 05:52:29.037024] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2816:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:07:10.387  [2024-11-18 05:52:29.037077] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2816:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32
00:07:10.387  [2024-11-18 05:52:29.037134] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2816:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64
00:07:10.387  passed
00:07:10.387    Test: bdev_io_alignment_with_boundary ...passed
00:07:10.387    Test: bdev_io_alignment ...passed
00:07:10.387    Test: bdev_histograms ...passed
00:07:10.387    Test: bdev_write_zeroes ...passed
00:07:10.387    Test: bdev_compare_and_write ...passed
00:07:10.387    Test: bdev_compare ...passed
00:07:10.387    Test: bdev_compare_emulated ...passed
00:07:10.387    Test: bdev_zcopy_write ...passed
00:07:10.387    Test: bdev_zcopy_read ...passed
00:07:10.387    Test: bdev_open_while_hotremove ...passed
00:07:10.387    Test: bdev_close_while_hotremove ...passed
00:07:10.387    Test: bdev_open_ext_test ...[2024-11-18 05:52:29.313983] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8304:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:07:10.387  passed
00:07:10.387    Test: bdev_open_ext_unregister ...[2024-11-18 05:52:29.314184] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8304:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:07:10.387  passed
00:07:10.387    Test: bdev_set_io_timeout ...passed
00:07:10.387    Test: bdev_set_qd_sampling ...passed
00:07:10.387    Test: lba_range_overlap ...passed
00:07:10.387    Test: lock_lba_range_check_ranges ...passed
00:07:10.387    Test: lock_lba_range_with_io_outstanding ...passed
00:07:10.387    Test: lock_lba_range_overlapped ...passed
00:07:10.387    Test: bdev_quiesce ...[2024-11-18 05:52:29.413390] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10283:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found.
00:07:10.387  passed
00:07:10.387    Test: bdev_io_abort ...passed
00:07:10.387    Test: bdev_unmap ...passed
00:07:10.387    Test: bdev_write_zeroes_split_test ...passed
00:07:10.387    Test: bdev_set_options_test ...passed
00:07:10.387    Test: bdev_get_memory_domains ...passed
00:07:10.387    Test: bdev_io_ext ...[2024-11-18 05:52:29.483030] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 505:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:07:10.387  passed
00:07:10.387    Test: bdev_io_ext_no_opts ...passed
00:07:10.387    Test: bdev_io_ext_invalid_opts ...passed
00:07:10.387    Test: bdev_io_ext_split ...passed
00:07:10.387    Test: bdev_io_ext_bounce_buffer ...passed
00:07:10.387    Test: bdev_register_uuid_alias ...[2024-11-18 05:52:29.589780] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4691:bdev_name_add: *ERROR*: Bdev name ad243219-b0bb-4dc2-948f-acb847f303e3 already exists
00:07:10.387  [2024-11-18 05:52:29.589855] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:ad243219-b0bb-4dc2-948f-acb847f303e3 alias for bdev bdev0
00:07:10.387  passed
00:07:10.387    Test: bdev_unregister_by_name ...[2024-11-18 05:52:29.603936] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8094:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1
00:07:10.387  passed
00:07:10.387    Test: for_each_bdev_test ...[2024-11-18 05:52:29.603987] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8102:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module.
00:07:10.387  passed
00:07:10.387    Test: bdev_seek_test ...passed
00:07:10.387    Test: bdev_copy ...passed
00:07:10.387    Test: bdev_copy_split_test ...passed
00:07:10.387    Test: examine_locks ...passed
00:07:10.387    Test: claim_v2_rwo ...[2024-11-18 05:52:29.660849] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8198:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:10.387  passed
00:07:10.387    Test: claim_v2_rom ...[2024-11-18 05:52:29.660927] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8838:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.660948] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.660969] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.660990] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8675:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661020] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8833:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims
00:07:10.387  [2024-11-18 05:52:29.661154] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8198:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:10.387  passed
00:07:10.387    Test: claim_v2_rwm ...[2024-11-18 05:52:29.661185] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661203] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8675:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661249] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8876:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims
00:07:10.387  [2024-11-18 05:52:29.661272] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8871:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:10.387  [2024-11-18 05:52:29.661370] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:07:10.387  [2024-11-18 05:52:29.661411] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8198:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661439] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661453] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661470] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8675:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661483] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8926:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut
00:07:10.387  passed
00:07:10.387    Test: claim_v2_existing_writer ...[2024-11-18 05:52:29.661544] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8906:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:07:10.387  [2024-11-18 05:52:29.661657] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8871:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:10.387  passed
00:07:10.387    Test: claim_v2_existing_v1 ...[2024-11-18 05:52:29.661678] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8871:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:10.387  [2024-11-18 05:52:29.661806] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661830] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661845] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:10.387  passed
00:07:10.387    Test: claim_v1_existing_v2 ...passed
00:07:10.387    Test: examine_claimed ...[2024-11-18 05:52:29.661941] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8675:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661969] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8675:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.661993] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8675:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:10.387  [2024-11-18 05:52:29.662259] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9003:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:07:10.387  passed
00:07:10.387    Test: get_numa_id ...passed
00:07:10.387    Test: get_device_stat_with_reset ...passed
00:07:10.387  
00:07:10.387  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.387                suites      1      1    n/a      0        0
00:07:10.387                 tests     61     61     61      0        0
00:07:10.387               asserts   4643   4643   4643      0      n/a
00:07:10.387  
00:07:10.387  Elapsed time =    0.916 seconds
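Each suite ends with a CUnit Run Summary like the one above; when post-processing a captured log, the Failed column (fifth field of the 'tests' row) is the quick pass/fail signal. A hedged awk one-liner over an illustrative log file:

    # exits non-zero if any 'tests' summary row reports failures
    awk '$1 == "tests" && $5 + 0 > 0 { failed = 1 } END { exit failed }' ut.log \
        && echo 'all suites passed'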
00:07:10.387   05:52:29 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
00:07:10.387  
00:07:10.387  
00:07:10.387       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.387       http://cunit.sourceforge.net/
00:07:10.387  
00:07:10.387  
00:07:10.387  Suite: nvme
00:07:10.387    Test: test_create_ctrlr ...passed
00:07:10.387    Test: test_reset_ctrlr ...[2024-11-18 05:52:29.730327] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.387  passed
00:07:10.387    Test: test_race_between_reset_and_destruct_ctrlr ...passed
00:07:10.387    Test: test_failover_ctrlr ...passed
00:07:10.387    Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-18 05:52:29.732879] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.387  [2024-11-18 05:52:29.733072] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.387  [2024-11-18 05:52:29.733254] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.387  passed
00:07:10.388    Test: test_pending_reset ...[2024-11-18 05:52:29.735133] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.735405] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_attach_ctrlr ...[2024-11-18 05:52:29.736454] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:07:10.388  passed
00:07:10.388    Test: test_aer_cb ...passed
00:07:10.388    Test: test_submit_nvme_cmd ...passed
00:07:10.388    Test: test_add_remove_trid ...passed
00:07:10.388    Test: test_abort ...[2024-11-18 05:52:29.739629] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7859:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure.
00:07:10.388  passed
00:07:10.388    Test: test_get_io_qpair ...passed
00:07:10.388    Test: test_bdev_unregister ...passed
00:07:10.388    Test: test_compare_ns ...passed
00:07:10.388    Test: test_init_ana_log_page ...passed
00:07:10.388    Test: test_get_memory_domains ...passed
00:07:10.388    Test: test_reconnect_qpair ...[2024-11-18 05:52:29.742014] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 17] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_create_bdev_ctrlr ...[2024-11-18 05:52:29.742451] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5755:bdev_nvme_check_multipath: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 18] cntlid 18 are duplicated.
00:07:10.388  passed
00:07:10.388    Test: test_add_multi_ns_to_bdev ...[2024-11-18 05:52:29.743621] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4912:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical.
00:07:10.388  passed
00:07:10.388    Test: test_add_multi_io_paths_to_nbdev_ch ...passed
00:07:10.388    Test: test_admin_path ...passed
00:07:10.388    Test: test_reset_bdev_ctrlr ...[2024-11-18 05:52:29.748030] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.748256] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.748392] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.748787] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.749088] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.749216] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.749527] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.749644] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.749910] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.749964] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.750103] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.750141] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_find_io_path ...passed
00:07:10.388    Test: test_retry_io_if_ana_state_is_updating ...passed
00:07:10.388    Test: test_retry_io_for_io_path_error ...passed
00:07:10.388    Test: test_retry_io_count ...passed
00:07:10.388    Test: test_concurrent_read_ana_log_page ...passed
00:07:10.388    Test: test_retry_io_for_ana_error ...passed
00:07:10.388    Test: test_check_io_error_resiliency_params ...[2024-11-18 05:52:29.753303] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6501:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1.
00:07:10.388  [2024-11-18 05:52:29.753347] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6505:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:07:10.388  [2024-11-18 05:52:29.753380] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6514:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:07:10.388  [2024-11-18 05:52:29.753399] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6517:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec.
00:07:10.388  [2024-11-18 05:52:29.753418] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6529:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:07:10.388  passed
00:07:10.388    Test: test_retry_io_if_ctrlr_is_resetting ...[2024-11-18 05:52:29.753435] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6529:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:07:10.388  [2024-11-18 05:52:29.753454] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6509:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec.
00:07:10.388  [2024-11-18 05:52:29.753470] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6524:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec.
00:07:10.388  [2024-11-18 05:52:29.753488] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6521:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec.
00:07:10.388  passed
00:07:10.388    Test: test_reconnect_ctrlr ...[2024-11-18 05:52:29.754177] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.754296] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.754529] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.754630] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.754694] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_retry_failover_ctrlr ...[2024-11-18 05:52:29.755031] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_fail_path ...[2024-11-18 05:52:29.755569] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.755732] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.755857] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.755952] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.756032] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_nvme_ns_cmp ...passed
00:07:10.388    Test: test_ana_transition ...passed
00:07:10.388    Test: test_set_preferred_path ...passed
00:07:10.388    Test: test_find_next_io_path ...passed
00:07:10.388    Test: test_find_io_path_min_qd ...passed
00:07:10.388    Test: test_disable_auto_failback ...[2024-11-18 05:52:29.757741] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 45] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_set_multipath_policy ...passed
00:07:10.388    Test: test_uuid_generation ...passed
00:07:10.388    Test: test_retry_io_to_same_path ...passed
00:07:10.388    Test: test_race_between_reset_and_disconnected ...passed
00:07:10.388    Test: test_ctrlr_op_rpc ...passed
00:07:10.388    Test: test_bdev_ctrlr_op_rpc ...passed
00:07:10.388    Test: test_disable_enable_ctrlr ...[2024-11-18 05:52:29.761320] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  [2024-11-18 05:52:29.761437] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:10.388  passed
00:07:10.388    Test: test_delete_ctrlr_done ...passed
00:07:10.388    Test: test_ns_remove_during_reset ...passed
00:07:10.388    Test: test_io_path_is_current ...passed
00:07:10.388    Test: test_bdev_reset_abort_io ...passed
00:07:10.388    Test: test_race_between_clear_pending_resets_and_reset_ctrlr_complete ...passed
00:07:10.388  
00:07:10.388  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.388                suites      1      1    n/a      0        0
00:07:10.388                 tests     51     51     51      0        0
00:07:10.388               asserts   4017   4017   4017      0      n/a
00:07:10.388  
00:07:10.388  Elapsed time =    0.035 seconds
00:07:10.388   05:52:29 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut
00:07:10.388  
00:07:10.388  
00:07:10.388       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.388       http://cunit.sourceforge.net/
00:07:10.388  
00:07:10.388  Test Options
00:07:10.388  blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2
00:07:10.388  
00:07:10.388  Suite: raid
00:07:10.388    Test: test_create_raid ...passed
00:07:10.388    Test: test_create_raid_superblock ...passed
00:07:10.388    Test: test_delete_raid ...passed
00:07:10.388    Test: test_create_raid_invalid_args ...[2024-11-18 05:52:29.807268] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1521:_raid_bdev_create: *ERROR*: Unsupported raid level '-1'
00:07:10.388  [2024-11-18 05:52:29.807701] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1515:_raid_bdev_create: *ERROR*: Invalid strip size 1231
00:07:10.388  [2024-11-18 05:52:29.808530] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1505:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1
00:07:10.389  [2024-11-18 05:52:29.808756] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:07:10.389  [2024-11-18 05:52:29.808858] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:07:10.389  [2024-11-18 05:52:29.809962] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:07:10.389  [2024-11-18 05:52:29.810033] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:07:10.389  passed
00:07:10.389    Test: test_delete_raid_invalid_args ...passed
00:07:10.389    Test: test_io_channel ...passed
00:07:10.389    Test: test_reset_io ...passed
00:07:10.389    Test: test_multi_raid ...passed
00:07:10.389    Test: test_io_type_supported ...passed
00:07:10.389    Test: test_raid_json_dump_info ...passed
00:07:10.389    Test: test_context_size ...passed
00:07:10.389    Test: test_raid_level_conversions ...passed
00:07:10.389    Test: test_raid_io_split ...passed
00:07:10.389    Test: test_raid_process ...passed
00:07:10.389    Test: test_raid_process_with_qos ...passed
00:07:10.389  
00:07:10.389  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.389                suites      1      1    n/a      0        0
00:07:10.389                 tests     15     15     15      0        0
00:07:10.389               asserts   6602   6602   6602      0      n/a
00:07:10.389  
00:07:10.389  Elapsed time =    0.030 seconds
00:07:10.389   05:52:29 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut
00:07:10.389  
00:07:10.389  
00:07:10.389       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.389       http://cunit.sourceforge.net/
00:07:10.389  
00:07:10.389  
00:07:10.389  Suite: raid_sb
00:07:10.389    Test: test_raid_bdev_write_superblock ...passed
00:07:10.389    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:10.389    Test: test_raid_bdev_parse_superblock ...[2024-11-18 05:52:29.868253] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:10.389  passed
00:07:10.389  Suite: raid_sb_md
00:07:10.389    Test: test_raid_bdev_write_superblock ...passed
00:07:10.389    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:10.389    Test: test_raid_bdev_parse_superblock ...passed
00:07:10.389  [2024-11-18 05:52:29.868658] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:10.389  
00:07:10.389  Suite: raid_sb_md_interleaved
00:07:10.389    Test: test_raid_bdev_write_superblock ...passed
00:07:10.389    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:10.389    Test: test_raid_bdev_parse_superblock ...passed
00:07:10.389  
00:07:10.389  [2024-11-18 05:52:29.869041] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:10.389  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.389                suites      3      3    n/a      0        0
00:07:10.389                 tests      9      9      9      0        0
00:07:10.389               asserts    139    139    139      0      n/a
00:07:10.389  
00:07:10.389  Elapsed time =    0.002 seconds
00:07:10.389   05:52:29 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut
00:07:10.389  
00:07:10.389  
00:07:10.389       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.389       http://cunit.sourceforge.net/
00:07:10.389  
00:07:10.389  
00:07:10.389  Suite: concat
00:07:10.389    Test: test_concat_start ...passed
00:07:10.389    Test: test_concat_rw ...passed
00:07:10.389    Test: test_concat_null_payload ...passed
00:07:10.389  
00:07:10.389  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.389                suites      1      1    n/a      0        0
00:07:10.389                 tests      3      3      3      0        0
00:07:10.389               asserts   8460   8460   8460      0      n/a
00:07:10.389  
00:07:10.389  Elapsed time =    0.010 seconds
00:07:10.389   05:52:29 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut
00:07:10.389  
00:07:10.389  
00:07:10.389       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.389       http://cunit.sourceforge.net/
00:07:10.389  
00:07:10.389  
00:07:10.389  Suite: raid0
00:07:10.389    Test: test_write_io ...passed
00:07:10.389    Test: test_read_io ...passed
00:07:10.389    Test: test_unmap_io ...passed
00:07:10.389    Test: test_io_failure ...passed
00:07:10.389  Suite: raid0_dif
00:07:10.389    Test: test_write_io ...passed
00:07:10.389    Test: test_read_io ...passed
00:07:10.389    Test: test_unmap_io ...passed
00:07:10.389    Test: test_io_failure ...passed
00:07:10.389  
00:07:10.389  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.389                suites      2      2    n/a      0        0
00:07:10.389                 tests      8      8      8      0        0
00:07:10.389               asserts 368291 368291 368291      0      n/a
00:07:10.389  
00:07:10.389  Elapsed time =    0.103 seconds
00:07:10.389   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut
00:07:10.389  
00:07:10.389  
00:07:10.389       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.389       http://cunit.sourceforge.net/
00:07:10.389  
00:07:10.389  
00:07:10.389  Suite: raid1
00:07:10.389    Test: test_raid1_start ...passed
00:07:10.389    Test: test_raid1_read_balancing ...passed
00:07:10.389    Test: test_raid1_write_error ...passed
00:07:10.389    Test: test_raid1_read_error ...passed
00:07:10.389  
00:07:10.389  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.389                suites      1      1    n/a      0        0
00:07:10.389                 tests      4      4      4      0        0
00:07:10.389               asserts   4374   4374   4374      0      n/a
00:07:10.389  
00:07:10.389  Elapsed time =    0.005 seconds
00:07:10.389   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut
00:07:10.389  
00:07:10.389  
00:07:10.389       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.389       http://cunit.sourceforge.net/
00:07:10.389  
00:07:10.389  
00:07:10.389  Suite: zone
00:07:10.389    Test: test_zone_get_operation ...passed
00:07:10.389    Test: test_bdev_zone_get_info ...passed
00:07:10.389    Test: test_bdev_zone_management ...passed
00:07:10.389    Test: test_bdev_zone_append ...passed
00:07:10.389    Test: test_bdev_zone_append_with_md ...passed
00:07:10.389    Test: test_bdev_zone_appendv ...passed
00:07:10.389    Test: test_bdev_zone_appendv_with_md ...passed
00:07:10.389    Test: test_bdev_io_get_append_location ...passed
00:07:10.389  
00:07:10.389  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.389                suites      1      1    n/a      0        0
00:07:10.389                 tests      8      8      8      0        0
00:07:10.389               asserts     94     94     94      0      n/a
00:07:10.389  
00:07:10.389  Elapsed time =    0.001 seconds
00:07:10.389   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut
00:07:10.389  
00:07:10.389  
00:07:10.389       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.389       http://cunit.sourceforge.net/
00:07:10.389  
00:07:10.389  
00:07:10.389  Suite: gpt_parse
00:07:10.389    Test: test_parse_mbr_and_primary ...[2024-11-18 05:52:30.152976] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:10.389  [2024-11-18 05:52:30.153178] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:10.389  [2024-11-18 05:52:30.153238] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:07:10.389  [2024-11-18 05:52:30.153262] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:07:10.389  [2024-11-18 05:52:30.153296] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:07:10.389  [2024-11-18 05:52:30.153338] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:07:10.389  passed
00:07:10.389    Test: test_parse_secondary ...[2024-11-18 05:52:30.153969] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:07:10.389  [2024-11-18 05:52:30.153995] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:07:10.389  [2024-11-18 05:52:30.154054] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:07:10.389  [2024-11-18 05:52:30.154074] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:07:10.389  passed
00:07:10.389    Test: test_check_mbr ...[2024-11-18 05:52:30.154687] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:10.389  passed
00:07:10.389    Test: test_read_header ...[2024-11-18 05:52:30.154728] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:10.389  [2024-11-18 05:52:30.154871] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600
00:07:10.389  [2024-11-18 05:52:30.154904] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438
00:07:10.389  [2024-11-18 05:52:30.154943] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match
00:07:10.389  [2024-11-18 05:52:30.154974] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1)
00:07:10.389  passed
00:07:10.389    Test: test_read_partitions ...[2024-11-18 05:52:30.155008] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0)
00:07:10.389  [2024-11-18 05:52:30.155030] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error
00:07:10.389  [2024-11-18 05:52:30.155099] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128
00:07:10.389  [2024-11-18 05:52:30.155118] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80)
00:07:10.390  [2024-11-18 05:52:30.155157] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough
00:07:10.390  [2024-11-18 05:52:30.155177] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf
00:07:10.390  [2024-11-18 05:52:30.155440] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match
00:07:10.390  passed
00:07:10.390  
00:07:10.390  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.390                suites      1      1    n/a      0        0
00:07:10.390                 tests      5      5      5      0        0
00:07:10.390               asserts     33     33     33      0      n/a
00:07:10.390  
00:07:10.390  Elapsed time =    0.003 seconds
00:07:10.390   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut
00:07:10.390  
00:07:10.390  
00:07:10.390       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.390       http://cunit.sourceforge.net/
00:07:10.390  
00:07:10.390  
00:07:10.390  Suite: bdev_part
00:07:10.390    Test: part_test ...[2024-11-18 05:52:30.195380] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4691:bdev_name_add: *ERROR*: Bdev name 82aa18cf-1874-5001-adc3-13a6cedbe4a9 already exists
00:07:10.390  [2024-11-18 05:52:30.195642] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7842:bdev_register: *ERROR*: Unable to add uuid:82aa18cf-1874-5001-adc3-13a6cedbe4a9 alias for bdev test1
00:07:10.390  passed
00:07:10.390    Test: part_free_test ...passed
00:07:10.390    Test: part_get_io_channel_test ...passed
00:07:10.390    Test: part_construct_ext ...passed
00:07:10.390  
00:07:10.390  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.390                suites      1      1    n/a      0        0
00:07:10.390                 tests      4      4      4      0        0
00:07:10.390               asserts     48     48     48      0      n/a
00:07:10.390  
00:07:10.390  Elapsed time =    0.046 seconds
00:07:10.390   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut
00:07:10.390  
00:07:10.390  
00:07:10.390       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.390       http://cunit.sourceforge.net/
00:07:10.390  
00:07:10.390  
00:07:10.390  Suite: scsi_nvme_suite
00:07:10.390    Test: scsi_nvme_translate_test ...passed
00:07:10.390  
00:07:10.390  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.390                suites      1      1    n/a      0        0
00:07:10.390                 tests      1      1      1      0        0
00:07:10.390               asserts    104    104    104      0      n/a
00:07:10.390  
00:07:10.390  Elapsed time =    0.000 seconds
00:07:10.390   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut
00:07:10.390  
00:07:10.390  
00:07:10.390       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.390       http://cunit.sourceforge.net/
00:07:10.390  
00:07:10.390  
00:07:10.390  Suite: lvol
00:07:10.390    Test: ut_lvs_init ...[2024-11-18 05:52:30.314089] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev
00:07:10.390  [2024-11-18 05:52:30.314427] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device
00:07:10.390  passed
00:07:10.390    Test: ut_lvol_init ...passed
00:07:10.390    Test: ut_lvol_snapshot ...passed
00:07:10.390    Test: ut_lvol_clone ...passed
00:07:10.390    Test: ut_lvs_destroy ...passed
00:07:10.390    Test: ut_lvs_unload ...passed
00:07:10.390    Test: ut_lvol_resize ...[2024-11-18 05:52:30.316070] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist
00:07:10.390  passed
00:07:10.390    Test: ut_lvol_set_read_only ...passed
00:07:10.390    Test: ut_lvol_hotremove ...passed
00:07:10.390    Test: ut_vbdev_lvol_get_io_channel ...passed
00:07:10.390    Test: ut_vbdev_lvol_io_type_supported ...passed
00:07:10.390    Test: ut_lvol_read_write ...passed
00:07:10.390    Test: ut_vbdev_lvol_submit_request ...passed
00:07:10.390    Test: ut_lvol_examine_config ...passed
00:07:10.390    Test: ut_lvol_examine_disk ...[2024-11-18 05:52:30.316719] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID
00:07:10.390  passed
00:07:10.390    Test: ut_lvol_rename ...[2024-11-18 05:52:30.317734] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name'
00:07:10.390  [2024-11-18 05:52:30.317817] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed
00:07:10.390  passed
00:07:10.390    Test: ut_bdev_finish ...passed
00:07:10.390    Test: ut_lvs_rename ...passed
00:07:10.390    Test: ut_lvol_seek ...passed
00:07:10.390    Test: ut_esnap_dev_create ...[2024-11-18 05:52:30.318432] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID
00:07:10.390  [2024-11-18 05:52:30.318483] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36)
00:07:10.390  passed
00:07:10.390    Test: ut_lvol_esnap_clone_bad_args ...[2024-11-18 05:52:30.318530] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID
00:07:10.390  [2024-11-18 05:52:30.318665] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified
00:07:10.390  [2024-11-18 05:52:30.318707] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:07:10.390  passed
00:07:10.390    Test: ut_lvol_shallow_copy ...[2024-11-18 05:52:30.319028] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:07:10.390  passed
00:07:10.390    Test: ut_lvol_set_external_parent ...[2024-11-18 05:52:30.319089] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL
00:07:10.390  passed
00:07:10.390  
00:07:10.390  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.390                suites      1      1    n/a      0        0
00:07:10.390                 tests     23     23     23      0        0
00:07:10.390               asserts    770    770    770      0      n/a
00:07:10.390  
00:07:10.390  Elapsed time =    0.005 seconds
00:07:10.390  [2024-11-18 05:52:30.319200] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:07:10.390  
00:07:10.390   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut
00:07:10.390  
00:07:10.390  
00:07:10.390       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.390       http://cunit.sourceforge.net/
00:07:10.390  
00:07:10.390  
00:07:10.390  Suite: zone_block
00:07:10.390    Test: test_zone_block_create ...passed
00:07:10.390    Test: test_zone_block_create_invalid ...[2024-11-18 05:52:30.376075] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed
00:07:10.390  [2024-11-18 05:52:30.376306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:07:10.390  [2024-11-18 05:52:30.376450] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev
00:07:10.390  [2024-11-18 05:52:30.376811] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:07:10.390  [2024-11-18 05:52:30.377225] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0
00:07:10.390  [2024-11-18 05:52:30.377287] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:07:10.390  [2024-11-18 05:52:30.377395] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0
00:07:10.390  [2024-11-18 05:52:30.377808] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:07:10.390  passed
00:07:10.390    Test: test_get_zone_info ...[2024-11-18 05:52:30.379081] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  [2024-11-18 05:52:30.379170] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  [2024-11-18 05:52:30.379226] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  passed
00:07:10.390    Test: test_supported_io_types ...passed
00:07:10.390    Test: test_reset_zone ...[2024-11-18 05:52:30.380721] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  [2024-11-18 05:52:30.380823] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  passed
00:07:10.390    Test: test_open_zone ...[2024-11-18 05:52:30.381591] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  [2024-11-18 05:52:30.382574] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  [2024-11-18 05:52:30.382647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  passed
00:07:10.390    Test: test_zone_write ...[2024-11-18 05:52:30.383570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:07:10.390  [2024-11-18 05:52:30.383747] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  [2024-11-18 05:52:30.383842] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:07:10.390  [2024-11-18 05:52:30.383873] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.390  [2024-11-18 05:52:30.389712] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405)
00:07:10.390  [2024-11-18 05:52:30.389777] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.389849] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405)
00:07:10.391  [2024-11-18 05:52:30.389875] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.395625] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:07:10.391  [2024-11-18 05:52:30.395690] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  passed
00:07:10.391    Test: test_zone_read ...[2024-11-18 05:52:30.396441] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10)
00:07:10.391  [2024-11-18 05:52:30.396492] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.396548] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000)
00:07:10.391  [2024-11-18 05:52:30.396582] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.397290] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10)
00:07:10.391  [2024-11-18 05:52:30.397360] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  passed
00:07:10.391    Test: test_close_zone ...[2024-11-18 05:52:30.398117] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.398213] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.398915] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.398990] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  passed
00:07:10.391    Test: test_finish_zone ...[2024-11-18 05:52:30.400109] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.400217] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  passed
00:07:10.391    Test: test_append_zone ...[2024-11-18 05:52:30.401129] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:07:10.391  [2024-11-18 05:52:30.401188] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.401277] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:07:10.391  [2024-11-18 05:52:30.401298] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  [2024-11-18 05:52:30.417160] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:07:10.391  [2024-11-18 05:52:30.417218] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:10.391  passed
00:07:10.391  
00:07:10.391  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.391                suites      1      1    n/a      0        0
00:07:10.391                 tests     11     11     11      0        0
00:07:10.391               asserts   3437   3437   3437      0      n/a
00:07:10.391  
00:07:10.391  Elapsed time =    0.043 seconds
00:07:10.391   05:52:30 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut
00:07:10.391  
00:07:10.391  
00:07:10.391       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.391       http://cunit.sourceforge.net/
00:07:10.391  
00:07:10.391  
00:07:10.391  Suite: bdev
00:07:10.391    Test: basic ...[2024-11-18 05:52:30.506587] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5fc494081001): Operation not permitted (rc=-1)
00:07:10.391  [2024-11-18 05:52:30.506984] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x5130000003c0 (0x5fc494080fc0): Operation not permitted (rc=-1)
00:07:10.391  [2024-11-18 05:52:30.507052] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x5fc494081001): Operation not permitted (rc=-1)
00:07:10.391  passed
00:07:10.391    Test: unregister_and_close ...passed
00:07:10.391    Test: unregister_and_close_different_threads ...passed
00:07:10.391    Test: basic_qos ...passed
00:07:10.391    Test: put_channel_during_reset ...passed
00:07:10.391    Test: aborted_reset ...passed
00:07:10.391    Test: aborted_reset_no_outstanding_io ...passed
00:07:10.391    Test: io_during_reset ...passed
00:07:10.391    Test: reset_completions ...passed
00:07:10.391    Test: io_during_qos_queue ...passed
00:07:10.391    Test: io_during_qos_reset ...passed
00:07:10.391    Test: enomem ...passed
00:07:10.391    Test: enomem_multi_bdev ...passed
00:07:10.391    Test: enomem_multi_bdev_unregister ...passed
00:07:10.391    Test: enomem_multi_io_target ...passed
00:07:10.391    Test: qos_dynamic_enable ...passed
00:07:10.391    Test: bdev_histograms_mt ...passed
00:07:10.391    Test: bdev_set_io_timeout_mt ...[2024-11-18 05:52:31.034200] thread.c: 484:spdk_thread_lib_fini: *ERROR*: io_device 0x5130000003c0 not unregistered
00:07:10.391  passed
00:07:10.391    Test: lock_lba_range_then_submit_io ...[2024-11-18 05:52:31.041554] thread.c:2193:spdk_io_device_register: *ERROR*: io_device 0x5fc494080f80 already registered (old:0x5130000003c0 new:0x513000000c80)
00:07:10.391  passed
00:07:10.391    Test: unregister_during_reset ...passed
00:07:10.391    Test: event_notify_and_close ...passed
00:07:10.391    Test: unregister_and_qos_poller ...passed
00:07:10.391  Suite: bdev_wrong_thread
00:07:10.391    Test: spdk_bdev_register_wt ...[2024-11-18 05:52:31.146966] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8632:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x519000001980 (0x519000001980)
00:07:10.391  passed
00:07:10.391    Test: spdk_bdev_examine_wt ...[2024-11-18 05:52:31.147226] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 820:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x519000001980 (0x519000001980)
00:07:10.391  passed
00:07:10.391  
00:07:10.391  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:10.391                suites      2      2    n/a      0        0
00:07:10.391                 tests     24     24     24      0        0
00:07:10.391               asserts    621    621    621      0      n/a
00:07:10.391  
00:07:10.391  Elapsed time =    0.653 seconds
00:07:10.391  
00:07:10.391  real	0m2.428s
00:07:10.391  user	0m1.214s
00:07:10.391  sys	0m1.218s
00:07:10.391   05:52:31 unittest.unittest_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.391   05:52:31 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x
00:07:10.391  ************************************
00:07:10.391  END TEST unittest_bdev
00:07:10.391  ************************************
00:07:10.391   05:52:31 unittest -- unit/unittest.sh@197 -- # [[ n == y ]]
00:07:10.391   05:52:31 unittest -- unit/unittest.sh@202 -- # [[ n == y ]]
00:07:10.391   05:52:31 unittest -- unit/unittest.sh@207 -- # [[ n == y ]]
00:07:10.391   05:52:31 unittest -- unit/unittest.sh@211 -- # [[ n == y ]]
00:07:10.391   05:52:31 unittest -- unit/unittest.sh@215 -- # run_test unittest_blob_blobfs unittest_blob
00:07:10.391   05:52:31 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:10.391   05:52:31 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.391   05:52:31 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:10.391  ************************************
00:07:10.391  START TEST unittest_blob_blobfs
00:07:10.391  ************************************
00:07:10.391   05:52:31 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1129 -- # unittest_blob
00:07:10.391   05:52:31 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:07:10.391   05:52:31 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:07:10.391  
00:07:10.391  
00:07:10.391       CUnit - A unit testing framework for C - Version 2.1-3
00:07:10.391       http://cunit.sourceforge.net/
00:07:10.391  
00:07:10.391  
00:07:10.391  Suite: blob_nocopy_noextent
00:07:10.391    Test: blob_init ...[2024-11-18 05:52:31.253756] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:10.391  passed
00:07:10.391    Test: blob_thin_provision ...passed
00:07:10.391    Test: blob_read_only ...passed
00:07:10.392    Test: bs_load ...[2024-11-18 05:52:31.354914] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:10.392  passed
00:07:10.651    Test: bs_load_custom_cluster_size ...passed
00:07:10.651    Test: bs_load_after_failed_grow ...passed
00:07:10.651    Test: bs_cluster_sz ...[2024-11-18 05:52:31.380176] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:10.651  [2024-11-18 05:52:31.380777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:10.651  [2024-11-18 05:52:31.380896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:10.651  passed
00:07:10.651    Test: bs_resize_md ...passed
00:07:10.651    Test: bs_destroy ...passed
00:07:10.651    Test: bs_type ...passed
00:07:10.651    Test: bs_super_block ...passed
00:07:10.651    Test: bs_test_recover_cluster_count ...passed
00:07:10.651    Test: bs_grow_live ...passed
00:07:10.651    Test: bs_grow_live_no_space ...passed
00:07:10.651    Test: bs_test_grow ...passed
00:07:10.651    Test: blob_serialize_test ...passed
00:07:10.651    Test: super_block_crc ...passed
00:07:10.651    Test: blob_thin_prov_write_count_io ...passed
00:07:10.651    Test: blob_thin_prov_unmap_cluster ...passed
00:07:10.651    Test: bs_load_iter_test ...passed
00:07:10.651    Test: blob_relations ...[2024-11-18 05:52:31.558599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:10.651  [2024-11-18 05:52:31.558721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.651  [2024-11-18 05:52:31.559922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:10.651  [2024-11-18 05:52:31.559984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.651  passed
00:07:10.651    Test: blob_relations2 ...[2024-11-18 05:52:31.570615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:10.651  [2024-11-18 05:52:31.570714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.651  [2024-11-18 05:52:31.570759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:10.651  [2024-11-18 05:52:31.570785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.651  [2024-11-18 05:52:31.572739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:10.651  [2024-11-18 05:52:31.572811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.652  [2024-11-18 05:52:31.573400] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:10.652  [2024-11-18 05:52:31.573492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.652  passed
00:07:10.652    Test: blob_relations3 ...passed
00:07:10.911    Test: blobstore_clean_power_failure ...passed
00:07:10.911    Test: blob_delete_snapshot_power_failure ...[2024-11-18 05:52:31.667445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:10.911  [2024-11-18 05:52:31.675108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:10.911  [2024-11-18 05:52:31.675185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:10.911  [2024-11-18 05:52:31.675228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.911  [2024-11-18 05:52:31.682632] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:10.911  [2024-11-18 05:52:31.682693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:10.911  [2024-11-18 05:52:31.682731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:10.911  [2024-11-18 05:52:31.682775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.911  [2024-11-18 05:52:31.690661] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:10.911  [2024-11-18 05:52:31.690792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.911  [2024-11-18 05:52:31.699066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:10.911  [2024-11-18 05:52:31.699208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.911  [2024-11-18 05:52:31.707789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:10.911  [2024-11-18 05:52:31.707915] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:10.911  passed
00:07:10.911    Test: blob_create_snapshot_power_failure ...[2024-11-18 05:52:31.732952] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:10.911  [2024-11-18 05:52:31.749494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:10.911  [2024-11-18 05:52:31.758346] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:10.911  passed
00:07:10.911    Test: blob_io_unit ...passed
00:07:10.911    Test: blob_io_unit_compatibility ...passed
00:07:10.911    Test: blob_ext_md_pages ...passed
00:07:10.911    Test: blob_esnap_io_4096_4096 ...passed
00:07:10.911    Test: blob_esnap_io_512_512 ...passed
00:07:10.911    Test: blob_esnap_io_4096_512 ...passed
00:07:11.171    Test: blob_esnap_io_512_4096 ...passed
00:07:11.171    Test: blob_esnap_clone_resize ...passed
00:07:11.171  Suite: blob_bs_nocopy_noextent
00:07:11.171    Test: blob_open ...passed
00:07:11.171    Test: blob_create ...[2024-11-18 05:52:31.961992] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:11.171  passed
00:07:11.171    Test: blob_create_loop ...passed
00:07:11.171    Test: blob_create_fail ...[2024-11-18 05:52:32.037404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:11.171  passed
00:07:11.171    Test: blob_create_internal ...passed
00:07:11.171    Test: blob_create_zero_extent ...passed
00:07:11.171    Test: blob_snapshot ...passed
00:07:11.171    Test: blob_clone ...passed
00:07:11.430    Test: blob_inflate ...[2024-11-18 05:52:32.153342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:11.430  passed
00:07:11.430    Test: blob_delete ...passed
00:07:11.430    Test: blob_resize_test ...[2024-11-18 05:52:32.197580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:11.430  passed
00:07:11.430    Test: blob_resize_thin_test ...passed
00:07:11.430    Test: channel_ops ...passed
00:07:11.430    Test: blob_super ...passed
00:07:11.430    Test: blob_rw_verify_iov ...passed
00:07:11.430    Test: blob_unmap ...passed
00:07:11.430    Test: blob_iter ...passed
00:07:11.430    Test: blob_parse_md ...passed
00:07:11.430    Test: bs_load_pending_removal ...passed
00:07:11.430    Test: bs_unload ...[2024-11-18 05:52:32.396748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:11.430  passed
00:07:11.690    Test: bs_usable_clusters ...passed
00:07:11.690    Test: blob_crc ...[2024-11-18 05:52:32.434799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:11.690  [2024-11-18 05:52:32.434918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:11.690  passed
00:07:11.690    Test: blob_flags ...passed
00:07:11.690    Test: bs_version ...passed
00:07:11.690    Test: blob_set_xattrs_test ...[2024-11-18 05:52:32.493450] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:11.690  [2024-11-18 05:52:32.493567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:11.690  passed
00:07:11.690    Test: blob_thin_prov_alloc ...passed
00:07:11.949    Test: blob_insert_cluster_msg_test ...passed
00:07:11.949    Test: blob_thin_prov_rw ...passed
00:07:11.949    Test: blob_thin_prov_rle ...passed
00:07:11.949    Test: blob_thin_prov_rw_iov ...passed
00:07:11.949    Test: blob_snapshot_rw ...passed
00:07:11.949    Test: blob_snapshot_rw_iov ...passed
00:07:12.208    Test: blob_inflate_rw ...passed
00:07:12.208    Test: blob_snapshot_freeze_io ...passed
00:07:12.208    Test: blob_operation_split_rw ...passed
00:07:12.468    Test: blob_operation_split_rw_iov ...passed
00:07:12.468    Test: blob_simultaneous_operations ...[2024-11-18 05:52:33.277929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:12.468  [2024-11-18 05:52:33.278040] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:12.468  [2024-11-18 05:52:33.279411] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:12.468  [2024-11-18 05:52:33.279463] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:12.468  [2024-11-18 05:52:33.292439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:12.468  [2024-11-18 05:52:33.292530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:12.468  [2024-11-18 05:52:33.292708] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:12.468  [2024-11-18 05:52:33.292730] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:12.468  passed
00:07:12.468    Test: blob_persist_test ...passed
00:07:12.468    Test: blob_decouple_snapshot ...passed
00:07:12.468    Test: blob_seek_io_unit ...passed
00:07:12.468    Test: blob_nested_freezes ...passed
00:07:12.727    Test: blob_clone_resize ...passed
00:07:12.727    Test: blob_shallow_copy ...[2024-11-18 05:52:33.506204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:12.727  [2024-11-18 05:52:33.506520] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:12.727  [2024-11-18 05:52:33.506751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:12.727  passed
00:07:12.727  Suite: blob_blob_nocopy_noextent
00:07:12.727    Test: blob_write ...passed
00:07:12.727    Test: blob_read ...passed
00:07:12.727    Test: blob_rw_verify ...passed
00:07:12.727    Test: blob_rw_verify_iov_nomem ...passed
00:07:12.727    Test: blob_rw_iov_read_only ...passed
00:07:12.727    Test: blob_xattr ...passed
00:07:12.727    Test: blob_dirty_shutdown ...passed
00:07:12.727    Test: blob_is_degraded ...passed
00:07:12.727  Suite: blob_esnap_bs_nocopy_noextent
00:07:12.987    Test: blob_esnap_create ...passed
00:07:12.987    Test: blob_esnap_thread_add_remove ...passed
00:07:12.987    Test: blob_esnap_clone_snapshot ...passed
00:07:12.987    Test: blob_esnap_clone_inflate ...passed
00:07:12.987    Test: blob_esnap_clone_decouple ...passed
00:07:12.987    Test: blob_esnap_clone_reload ...passed
00:07:12.987    Test: blob_esnap_hotplug ...passed
00:07:12.987    Test: blob_set_parent ...[2024-11-18 05:52:33.856322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:12.987  [2024-11-18 05:52:33.856428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:12.987  [2024-11-18 05:52:33.856559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:12.987  [2024-11-18 05:52:33.856604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:12.987  [2024-11-18 05:52:33.857359] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:12.987  passed
00:07:12.987    Test: blob_set_external_parent ...[2024-11-18 05:52:33.878283] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:12.987  [2024-11-18 05:52:33.878389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:12.987  [2024-11-18 05:52:33.878426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:12.987  [2024-11-18 05:52:33.879029] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:12.987  passed
00:07:12.987  Suite: blob_nocopy_extent
00:07:12.987    Test: blob_init ...[2024-11-18 05:52:33.886713] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:12.987  passed
00:07:12.987    Test: blob_thin_provision ...passed
00:07:12.987    Test: blob_read_only ...passed
00:07:12.987    Test: bs_load ...[2024-11-18 05:52:33.916160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:12.987  passed
00:07:12.987    Test: bs_load_custom_cluster_size ...passed
00:07:12.987    Test: bs_load_after_failed_grow ...passed
00:07:12.987    Test: bs_cluster_sz ...[2024-11-18 05:52:33.932617] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:12.987  [2024-11-18 05:52:33.932921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:12.987  [2024-11-18 05:52:33.932966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:12.987  passed
00:07:12.987    Test: bs_resize_md ...passed
00:07:12.987    Test: bs_destroy ...passed
00:07:13.246    Test: bs_type ...passed
00:07:13.246    Test: bs_super_block ...passed
00:07:13.246    Test: bs_test_recover_cluster_count ...passed
00:07:13.246    Test: bs_grow_live ...passed
00:07:13.246    Test: bs_grow_live_no_space ...passed
00:07:13.246    Test: bs_test_grow ...passed
00:07:13.246    Test: blob_serialize_test ...passed
00:07:13.246    Test: super_block_crc ...passed
00:07:13.246    Test: blob_thin_prov_write_count_io ...passed
00:07:13.246    Test: blob_thin_prov_unmap_cluster ...passed
00:07:13.246    Test: bs_load_iter_test ...passed
00:07:13.246    Test: blob_relations ...[2024-11-18 05:52:34.072674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:13.246  [2024-11-18 05:52:34.072804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.246  [2024-11-18 05:52:34.073848] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:13.246  [2024-11-18 05:52:34.073927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.246  passed
00:07:13.246    Test: blob_relations2 ...[2024-11-18 05:52:34.085340] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:13.246  [2024-11-18 05:52:34.085436] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.247  [2024-11-18 05:52:34.085462] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:13.247  [2024-11-18 05:52:34.085475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.247  [2024-11-18 05:52:34.087335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:13.247  [2024-11-18 05:52:34.087412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.247  [2024-11-18 05:52:34.088057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:13.247  [2024-11-18 05:52:34.088118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.247  passed
00:07:13.247    Test: blob_relations3 ...passed
00:07:13.247    Test: blobstore_clean_power_failure ...passed
00:07:13.247    Test: blob_delete_snapshot_power_failure ...[2024-11-18 05:52:34.199259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:13.247  [2024-11-18 05:52:34.207871] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:13.247  [2024-11-18 05:52:34.216653] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:13.247  [2024-11-18 05:52:34.216735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:13.247  [2024-11-18 05:52:34.216801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.506  [2024-11-18 05:52:34.226874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:13.506  [2024-11-18 05:52:34.226987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:13.506  [2024-11-18 05:52:34.227049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:13.506  [2024-11-18 05:52:34.227100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.506  [2024-11-18 05:52:34.236565] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:13.506  [2024-11-18 05:52:34.236703] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:13.506  [2024-11-18 05:52:34.236727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:13.506  [2024-11-18 05:52:34.236751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.506  [2024-11-18 05:52:34.246385] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:13.506  [2024-11-18 05:52:34.246484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.506  [2024-11-18 05:52:34.257185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:13.506  [2024-11-18 05:52:34.257378] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.506  [2024-11-18 05:52:34.268041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:13.506  [2024-11-18 05:52:34.268164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:13.506  passed
00:07:13.506    Test: blob_create_snapshot_power_failure ...[2024-11-18 05:52:34.294803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:13.506  [2024-11-18 05:52:34.302870] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:13.506  [2024-11-18 05:52:34.318525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:13.506  [2024-11-18 05:52:34.327382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:13.506  passed
00:07:13.506    Test: blob_io_unit ...passed
00:07:13.506    Test: blob_io_unit_compatibility ...passed
00:07:13.506    Test: blob_ext_md_pages ...passed
00:07:13.506    Test: blob_esnap_io_4096_4096 ...passed
00:07:13.506    Test: blob_esnap_io_512_512 ...passed
00:07:13.506    Test: blob_esnap_io_4096_512 ...passed
00:07:13.506    Test: blob_esnap_io_512_4096 ...passed
00:07:13.506    Test: blob_esnap_clone_resize ...passed
00:07:13.506  Suite: blob_bs_nocopy_extent
00:07:13.766    Test: blob_open ...passed
00:07:13.766    Test: blob_create ...[2024-11-18 05:52:34.512067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:13.766  passed
00:07:13.766    Test: blob_create_loop ...passed
00:07:13.766    Test: blob_create_fail ...[2024-11-18 05:52:34.596063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:13.766  passed
00:07:13.766    Test: blob_create_internal ...passed
00:07:13.766    Test: blob_create_zero_extent ...passed
00:07:13.766    Test: blob_snapshot ...passed
00:07:13.766    Test: blob_clone ...passed
00:07:13.766    Test: blob_inflate ...[2024-11-18 05:52:34.722851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:13.766  passed
00:07:14.050    Test: blob_delete ...passed
00:07:14.050    Test: blob_resize_test ...[2024-11-18 05:52:34.766380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:14.050  passed
00:07:14.050    Test: blob_resize_thin_test ...passed
00:07:14.050    Test: channel_ops ...passed
00:07:14.050    Test: blob_super ...passed
00:07:14.050    Test: blob_rw_verify_iov ...passed
00:07:14.050    Test: blob_unmap ...passed
00:07:14.050    Test: blob_iter ...passed
00:07:14.050    Test: blob_parse_md ...passed
00:07:14.050    Test: bs_load_pending_removal ...passed
00:07:14.050    Test: bs_unload ...[2024-11-18 05:52:34.942984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:14.050  passed
00:07:14.050    Test: bs_usable_clusters ...passed
00:07:14.050    Test: blob_crc ...[2024-11-18 05:52:34.985868] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:14.050  [2024-11-18 05:52:34.986005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:14.050  passed
00:07:14.326    Test: blob_flags ...passed
00:07:14.326    Test: bs_version ...passed
00:07:14.326    Test: blob_set_xattrs_test ...[2024-11-18 05:52:35.057592] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:14.326  [2024-11-18 05:52:35.057728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:14.326  passed
00:07:14.326    Test: blob_thin_prov_alloc ...passed
00:07:14.326    Test: blob_insert_cluster_msg_test ...passed
00:07:14.326    Test: blob_thin_prov_rw ...passed
00:07:14.326    Test: blob_thin_prov_rle ...passed
00:07:14.326    Test: blob_thin_prov_rw_iov ...passed
00:07:14.326    Test: blob_snapshot_rw ...passed
00:07:14.326    Test: blob_snapshot_rw_iov ...passed
00:07:14.585    Test: blob_inflate_rw ...passed
00:07:14.585    Test: blob_snapshot_freeze_io ...passed
00:07:14.844    Test: blob_operation_split_rw ...passed
00:07:14.844    Test: blob_operation_split_rw_iov ...passed
00:07:14.844    Test: blob_simultaneous_operations ...[2024-11-18 05:52:35.800061] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.844  [2024-11-18 05:52:35.800183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:14.844  [2024-11-18 05:52:35.801362] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.844  [2024-11-18 05:52:35.801421] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:14.844  [2024-11-18 05:52:35.811065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.844  [2024-11-18 05:52:35.811166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:14.844  [2024-11-18 05:52:35.811290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:14.844  [2024-11-18 05:52:35.811310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.104  passed
00:07:15.104    Test: blob_persist_test ...passed
00:07:15.104    Test: blob_decouple_snapshot ...passed
00:07:15.104    Test: blob_seek_io_unit ...passed
00:07:15.104    Test: blob_nested_freezes ...passed
00:07:15.104    Test: blob_clone_resize ...passed
00:07:15.104    Test: blob_shallow_copy ...[2024-11-18 05:52:35.994983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:15.104  [2024-11-18 05:52:35.995233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:15.104  [2024-11-18 05:52:35.995407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:15.104  passed
00:07:15.104  Suite: blob_blob_nocopy_extent
00:07:15.104    Test: blob_write ...passed
00:07:15.104    Test: blob_read ...passed
00:07:15.104    Test: blob_rw_verify ...passed
00:07:15.363    Test: blob_rw_verify_iov_nomem ...passed
00:07:15.363    Test: blob_rw_iov_read_only ...passed
00:07:15.363    Test: blob_xattr ...passed
00:07:15.363    Test: blob_dirty_shutdown ...passed
00:07:15.363    Test: blob_is_degraded ...passed
00:07:15.363  Suite: blob_esnap_bs_nocopy_extent
00:07:15.363    Test: blob_esnap_create ...passed
00:07:15.363    Test: blob_esnap_thread_add_remove ...passed
00:07:15.363    Test: blob_esnap_clone_snapshot ...passed
00:07:15.363    Test: blob_esnap_clone_inflate ...passed
00:07:15.363    Test: blob_esnap_clone_decouple ...passed
00:07:15.363    Test: blob_esnap_clone_reload ...passed
00:07:15.363    Test: blob_esnap_hotplug ...passed
00:07:15.363    Test: blob_set_parent ...[2024-11-18 05:52:36.332810] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:15.363  [2024-11-18 05:52:36.332904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:15.363  [2024-11-18 05:52:36.333020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:15.363  [2024-11-18 05:52:36.333049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:15.363  [2024-11-18 05:52:36.333529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:15.363  passed
00:07:15.623    Test: blob_set_external_parent ...[2024-11-18 05:52:36.354175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:15.623  [2024-11-18 05:52:36.354268] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:15.623  [2024-11-18 05:52:36.354303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:15.623  [2024-11-18 05:52:36.354673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:15.623  passed
00:07:15.623  Suite: blob_copy_noextent
00:07:15.623    Test: blob_init ...[2024-11-18 05:52:36.361822] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:15.623  passed
00:07:15.623    Test: blob_thin_provision ...passed
00:07:15.623    Test: blob_read_only ...passed
00:07:15.623    Test: bs_load ...[2024-11-18 05:52:36.388652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:15.623  passed
00:07:15.623    Test: bs_load_custom_cluster_size ...passed
00:07:15.623    Test: bs_load_after_failed_grow ...passed
00:07:15.623    Test: bs_cluster_sz ...[2024-11-18 05:52:36.402731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:15.623  [2024-11-18 05:52:36.402985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:15.623  [2024-11-18 05:52:36.403041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:15.623  passed
00:07:15.623    Test: bs_resize_md ...passed
00:07:15.623    Test: bs_destroy ...passed
00:07:15.623    Test: bs_type ...passed
00:07:15.623    Test: bs_super_block ...passed
00:07:15.623    Test: bs_test_recover_cluster_count ...passed
00:07:15.623    Test: bs_grow_live ...passed
00:07:15.623    Test: bs_grow_live_no_space ...passed
00:07:15.623    Test: bs_test_grow ...passed
00:07:15.623    Test: blob_serialize_test ...passed
00:07:15.623    Test: super_block_crc ...passed
00:07:15.623    Test: blob_thin_prov_write_count_io ...passed
00:07:15.623    Test: blob_thin_prov_unmap_cluster ...passed
00:07:15.623    Test: bs_load_iter_test ...passed
00:07:15.623    Test: blob_relations ...[2024-11-18 05:52:36.520300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.623  [2024-11-18 05:52:36.520437] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.623  [2024-11-18 05:52:36.521182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.623  [2024-11-18 05:52:36.521252] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.623  passed
00:07:15.623    Test: blob_relations2 ...[2024-11-18 05:52:36.530918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.623  [2024-11-18 05:52:36.531013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.623  [2024-11-18 05:52:36.531052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.623  [2024-11-18 05:52:36.531065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.623  [2024-11-18 05:52:36.532112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.623  [2024-11-18 05:52:36.532178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.623  [2024-11-18 05:52:36.532516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:15.623  [2024-11-18 05:52:36.532561] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.623  passed
00:07:15.623    Test: blob_relations3 ...passed
00:07:15.883    Test: blobstore_clean_power_failure ...passed
00:07:15.883    Test: blob_delete_snapshot_power_failure ...[2024-11-18 05:52:36.635722] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:15.883  [2024-11-18 05:52:36.643491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:15.883  [2024-11-18 05:52:36.643602] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:15.883  [2024-11-18 05:52:36.643626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.883  [2024-11-18 05:52:36.651353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:15.883  [2024-11-18 05:52:36.651433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:15.883  [2024-11-18 05:52:36.651467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:15.883  [2024-11-18 05:52:36.651485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.883  [2024-11-18 05:52:36.659352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:15.883  [2024-11-18 05:52:36.659472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.883  [2024-11-18 05:52:36.667510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:15.883  [2024-11-18 05:52:36.667655] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.883  [2024-11-18 05:52:36.675568] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:15.883  [2024-11-18 05:52:36.675677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:15.883  passed
00:07:15.883    Test: blob_create_snapshot_power_failure ...[2024-11-18 05:52:36.699643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:15.883  [2024-11-18 05:52:36.715976] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:15.883  [2024-11-18 05:52:36.725439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:15.883  passed
00:07:15.883    Test: blob_io_unit ...passed
00:07:15.883    Test: blob_io_unit_compatibility ...passed
00:07:15.883    Test: blob_ext_md_pages ...passed
00:07:15.883    Test: blob_esnap_io_4096_4096 ...passed
00:07:15.883    Test: blob_esnap_io_512_512 ...passed
00:07:15.883    Test: blob_esnap_io_4096_512 ...passed
00:07:15.883    Test: blob_esnap_io_512_4096 ...passed
00:07:16.142    Test: blob_esnap_clone_resize ...passed
00:07:16.142  Suite: blob_bs_copy_noextent
00:07:16.142    Test: blob_open ...passed
00:07:16.142    Test: blob_create ...[2024-11-18 05:52:36.909935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:16.142  passed
00:07:16.142    Test: blob_create_loop ...passed
00:07:16.142    Test: blob_create_fail ...[2024-11-18 05:52:36.973308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:16.142  passed
00:07:16.142    Test: blob_create_internal ...passed
00:07:16.142    Test: blob_create_zero_extent ...passed
00:07:16.142    Test: blob_snapshot ...passed
00:07:16.142    Test: blob_clone ...passed
00:07:16.142    Test: blob_inflate ...[2024-11-18 05:52:37.070536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:16.142  passed
00:07:16.142    Test: blob_delete ...passed
00:07:16.142    Test: blob_resize_test ...[2024-11-18 05:52:37.109447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:16.142  passed
00:07:16.402    Test: blob_resize_thin_test ...passed
00:07:16.402    Test: channel_ops ...passed
00:07:16.402    Test: blob_super ...passed
00:07:16.402    Test: blob_rw_verify_iov ...passed
00:07:16.402    Test: blob_unmap ...passed
00:07:16.402    Test: blob_iter ...passed
00:07:16.402    Test: blob_parse_md ...passed
00:07:16.402    Test: bs_load_pending_removal ...passed
00:07:16.402    Test: bs_unload ...[2024-11-18 05:52:37.284338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:16.402  passed
00:07:16.402    Test: bs_usable_clusters ...passed
00:07:16.402    Test: blob_crc ...[2024-11-18 05:52:37.321777] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:16.402  [2024-11-18 05:52:37.321912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:16.402  passed
00:07:16.402    Test: blob_flags ...passed
00:07:16.402    Test: bs_version ...passed
00:07:16.402    Test: blob_set_xattrs_test ...[2024-11-18 05:52:37.376785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:16.402  [2024-11-18 05:52:37.376918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:16.661  passed
00:07:16.661    Test: blob_thin_prov_alloc ...passed
00:07:16.661    Test: blob_insert_cluster_msg_test ...passed
00:07:16.661    Test: blob_thin_prov_rw ...passed
00:07:16.661    Test: blob_thin_prov_rle ...passed
00:07:16.661    Test: blob_thin_prov_rw_iov ...passed
00:07:16.661    Test: blob_snapshot_rw ...passed
00:07:16.661    Test: blob_snapshot_rw_iov ...passed
00:07:16.920    Test: blob_inflate_rw ...passed
00:07:16.920    Test: blob_snapshot_freeze_io ...passed
00:07:17.179    Test: blob_operation_split_rw ...passed
00:07:17.179    Test: blob_operation_split_rw_iov ...passed
00:07:17.179    Test: blob_simultaneous_operations ...[2024-11-18 05:52:38.081014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.179  [2024-11-18 05:52:38.081121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.179  [2024-11-18 05:52:38.081591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.179  [2024-11-18 05:52:38.081627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.179  [2024-11-18 05:52:38.084043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.179  [2024-11-18 05:52:38.084099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.179  [2024-11-18 05:52:38.084208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:17.179  [2024-11-18 05:52:38.084226] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.179  passed
00:07:17.179    Test: blob_persist_test ...passed
00:07:17.179    Test: blob_decouple_snapshot ...passed
00:07:17.438    Test: blob_seek_io_unit ...passed
00:07:17.438    Test: blob_nested_freezes ...passed
00:07:17.438    Test: blob_clone_resize ...passed
00:07:17.438    Test: blob_shallow_copy ...[2024-11-18 05:52:38.245010] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:17.438  [2024-11-18 05:52:38.245247] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:17.439  [2024-11-18 05:52:38.245382] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:17.439  passed
00:07:17.439  Suite: blob_blob_copy_noextent
00:07:17.439    Test: blob_write ...passed
00:07:17.439    Test: blob_read ...passed
00:07:17.439    Test: blob_rw_verify ...passed
00:07:17.439    Test: blob_rw_verify_iov_nomem ...passed
00:07:17.439    Test: blob_rw_iov_read_only ...passed
00:07:17.439    Test: blob_xattr ...passed
00:07:17.439    Test: blob_dirty_shutdown ...passed
00:07:17.439    Test: blob_is_degraded ...passed
00:07:17.439  Suite: blob_esnap_bs_copy_noextent
00:07:17.698    Test: blob_esnap_create ...passed
00:07:17.698    Test: blob_esnap_thread_add_remove ...passed
00:07:17.698    Test: blob_esnap_clone_snapshot ...passed
00:07:17.698    Test: blob_esnap_clone_inflate ...passed
00:07:17.698    Test: blob_esnap_clone_decouple ...passed
00:07:17.698    Test: blob_esnap_clone_reload ...passed
00:07:17.698    Test: blob_esnap_hotplug ...passed
00:07:17.698    Test: blob_set_parent ...[2024-11-18 05:52:38.572302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:17.698  [2024-11-18 05:52:38.572399] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:17.698  [2024-11-18 05:52:38.572509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:17.698  [2024-11-18 05:52:38.572537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:17.698  [2024-11-18 05:52:38.573034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:17.698  passed
00:07:17.698    Test: blob_set_external_parent ...[2024-11-18 05:52:38.593386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:17.698  [2024-11-18 05:52:38.593479] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:17.698  [2024-11-18 05:52:38.593514] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:17.698  [2024-11-18 05:52:38.593917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:17.698  passed
00:07:17.698  Suite: blob_copy_extent
00:07:17.698    Test: blob_init ...[2024-11-18 05:52:38.601355] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:17.698  passed
00:07:17.698    Test: blob_thin_provision ...passed
00:07:17.698    Test: blob_read_only ...passed
00:07:17.698    Test: bs_load ...[2024-11-18 05:52:38.633864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:17.698  passed
00:07:17.698    Test: bs_load_custom_cluster_size ...passed
00:07:17.698    Test: bs_load_after_failed_grow ...passed
00:07:17.698    Test: bs_cluster_sz ...[2024-11-18 05:52:38.651901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:17.698  [2024-11-18 05:52:38.652113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:17.698  [2024-11-18 05:52:38.652156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:17.698  passed
00:07:17.698    Test: bs_resize_md ...passed
00:07:17.958    Test: bs_destroy ...passed
00:07:17.958    Test: bs_type ...passed
00:07:17.958    Test: bs_super_block ...passed
00:07:17.958    Test: bs_test_recover_cluster_count ...passed
00:07:17.958    Test: bs_grow_live ...passed
00:07:17.958    Test: bs_grow_live_no_space ...passed
00:07:17.958    Test: bs_test_grow ...passed
00:07:17.958    Test: blob_serialize_test ...passed
00:07:17.958    Test: super_block_crc ...passed
00:07:17.958    Test: blob_thin_prov_write_count_io ...passed
00:07:17.958    Test: blob_thin_prov_unmap_cluster ...passed
00:07:17.958    Test: bs_load_iter_test ...passed
00:07:17.958    Test: blob_relations ...[2024-11-18 05:52:38.774792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:17.958  [2024-11-18 05:52:38.774917] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.958  [2024-11-18 05:52:38.775583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:17.958  [2024-11-18 05:52:38.775656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.958  passed
00:07:17.958    Test: blob_relations2 ...[2024-11-18 05:52:38.785196] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:17.958  [2024-11-18 05:52:38.785255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.958  [2024-11-18 05:52:38.785291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:17.958  [2024-11-18 05:52:38.785302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.958  [2024-11-18 05:52:38.786358] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:17.958  [2024-11-18 05:52:38.786429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.958  [2024-11-18 05:52:38.786793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:17.958  [2024-11-18 05:52:38.786844] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.958  passed
00:07:17.958    Test: blob_relations3 ...passed
00:07:17.958    Test: blobstore_clean_power_failure ...passed
00:07:17.958    Test: blob_delete_snapshot_power_failure ...[2024-11-18 05:52:38.870325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:17.958  [2024-11-18 05:52:38.877351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:17.958  [2024-11-18 05:52:38.884645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:17.958  [2024-11-18 05:52:38.884723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:17.958  [2024-11-18 05:52:38.884776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.958  [2024-11-18 05:52:38.892208] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:17.958  [2024-11-18 05:52:38.892339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:17.959  [2024-11-18 05:52:38.892361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:17.959  [2024-11-18 05:52:38.892381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.959  [2024-11-18 05:52:38.899733] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:17.959  [2024-11-18 05:52:38.899857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:17.959  [2024-11-18 05:52:38.899874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:17.959  [2024-11-18 05:52:38.899894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.959  [2024-11-18 05:52:38.907354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:17.959  [2024-11-18 05:52:38.907466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.959  [2024-11-18 05:52:38.915662] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:17.959  [2024-11-18 05:52:38.915826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:17.959  [2024-11-18 05:52:38.924149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:17.959  [2024-11-18 05:52:38.924294] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:18.218  passed
00:07:18.218    Test: blob_create_snapshot_power_failure ...[2024-11-18 05:52:38.950804] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:18.218  [2024-11-18 05:52:38.958651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:18.218  [2024-11-18 05:52:38.974183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:18.218  [2024-11-18 05:52:38.982424] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:18.218  passed
00:07:18.218    Test: blob_io_unit ...passed
00:07:18.218    Test: blob_io_unit_compatibility ...passed
00:07:18.218    Test: blob_ext_md_pages ...passed
00:07:18.218    Test: blob_esnap_io_4096_4096 ...passed
00:07:18.218    Test: blob_esnap_io_512_512 ...passed
00:07:18.218    Test: blob_esnap_io_4096_512 ...passed
00:07:18.218    Test: blob_esnap_io_512_4096 ...passed
00:07:18.218    Test: blob_esnap_clone_resize ...passed
00:07:18.218  Suite: blob_bs_copy_extent
00:07:18.218    Test: blob_open ...passed
00:07:18.218    Test: blob_create ...[2024-11-18 05:52:39.164783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:18.218  passed
00:07:18.477    Test: blob_create_loop ...passed
00:07:18.477    Test: blob_create_fail ...[2024-11-18 05:52:39.233433] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:18.477  passed
00:07:18.477    Test: blob_create_internal ...passed
00:07:18.477    Test: blob_create_zero_extent ...passed
00:07:18.477    Test: blob_snapshot ...passed
00:07:18.477    Test: blob_clone ...passed
00:07:18.477    Test: blob_inflate ...[2024-11-18 05:52:39.333282] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:18.477  passed
00:07:18.477    Test: blob_delete ...passed
00:07:18.477    Test: blob_resize_test ...[2024-11-18 05:52:39.376508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:18.477  passed
00:07:18.477    Test: blob_resize_thin_test ...passed
00:07:18.477    Test: channel_ops ...passed
00:07:18.477    Test: blob_super ...passed
00:07:18.736    Test: blob_rw_verify_iov ...passed
00:07:18.736    Test: blob_unmap ...passed
00:07:18.736    Test: blob_iter ...passed
00:07:18.736    Test: blob_parse_md ...passed
00:07:18.736    Test: bs_load_pending_removal ...passed
00:07:18.736    Test: bs_unload ...[2024-11-18 05:52:39.565327] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:18.736  passed
00:07:18.736    Test: bs_usable_clusters ...passed
00:07:18.736    Test: blob_crc ...[2024-11-18 05:52:39.604517] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:18.736  [2024-11-18 05:52:39.604620] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:18.736  passed
00:07:18.736    Test: blob_flags ...passed
00:07:18.736    Test: bs_version ...passed
00:07:18.736    Test: blob_set_xattrs_test ...[2024-11-18 05:52:39.660005] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:18.736  [2024-11-18 05:52:39.660101] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:18.736  passed
00:07:18.996    Test: blob_thin_prov_alloc ...passed
00:07:18.996    Test: blob_insert_cluster_msg_test ...passed
00:07:18.996    Test: blob_thin_prov_rw ...passed
00:07:18.996    Test: blob_thin_prov_rle ...passed
00:07:18.996    Test: blob_thin_prov_rw_iov ...passed
00:07:18.996    Test: blob_snapshot_rw ...passed
00:07:18.996    Test: blob_snapshot_rw_iov ...passed
00:07:19.255    Test: blob_inflate_rw ...passed
00:07:19.255    Test: blob_snapshot_freeze_io ...passed
00:07:19.515    Test: blob_operation_split_rw ...passed
00:07:19.515    Test: blob_operation_split_rw_iov ...passed
00:07:19.515    Test: blob_simultaneous_operations ...[2024-11-18 05:52:40.397007] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:19.515  [2024-11-18 05:52:40.397106] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.515  [2024-11-18 05:52:40.397532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:19.515  [2024-11-18 05:52:40.397570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.515  [2024-11-18 05:52:40.399909] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:19.515  [2024-11-18 05:52:40.399963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.515  [2024-11-18 05:52:40.400051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:19.515  [2024-11-18 05:52:40.400069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:19.515  passed
00:07:19.515    Test: blob_persist_test ...passed
00:07:19.515    Test: blob_decouple_snapshot ...passed
00:07:19.515    Test: blob_seek_io_unit ...passed
00:07:19.515    Test: blob_nested_freezes ...passed
00:07:19.774    Test: blob_clone_resize ...passed
00:07:19.774    Test: blob_shallow_copy ...[2024-11-18 05:52:40.537639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:19.774  [2024-11-18 05:52:40.537887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:19.774  [2024-11-18 05:52:40.538028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:19.774  passed
00:07:19.774  Suite: blob_blob_copy_extent
00:07:19.774    Test: blob_write ...passed
00:07:19.774    Test: blob_read ...passed
00:07:19.774    Test: blob_rw_verify ...passed
00:07:19.774    Test: blob_rw_verify_iov_nomem ...passed
00:07:19.774    Test: blob_rw_iov_read_only ...passed
00:07:19.774    Test: blob_xattr ...passed
00:07:19.774    Test: blob_dirty_shutdown ...passed
00:07:19.774    Test: blob_is_degraded ...passed
00:07:19.774  Suite: blob_esnap_bs_copy_extent
00:07:19.774    Test: blob_esnap_create ...passed
00:07:20.034    Test: blob_esnap_thread_add_remove ...passed
00:07:20.034    Test: blob_esnap_clone_snapshot ...passed
00:07:20.034    Test: blob_esnap_clone_inflate ...passed
00:07:20.034    Test: blob_esnap_clone_decouple ...passed
00:07:20.034    Test: blob_esnap_clone_reload ...passed
00:07:20.034    Test: blob_esnap_hotplug ...passed
00:07:20.034    Test: blob_set_parent ...[2024-11-18 05:52:40.886638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:20.034  [2024-11-18 05:52:40.886715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:20.034  [2024-11-18 05:52:40.886877] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:20.034  [2024-11-18 05:52:40.886908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:20.034  [2024-11-18 05:52:40.887445] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:20.034  passed
00:07:20.034    Test: blob_set_external_parent ...[2024-11-18 05:52:40.906665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:20.034  [2024-11-18 05:52:40.906736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:20.034  [2024-11-18 05:52:40.906771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:20.034  [2024-11-18 05:52:40.907300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:20.034  passed
00:07:20.034  
00:07:20.034  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.034                suites     16     16    n/a      0        0
00:07:20.034                 tests    376    376    376      0        0
00:07:20.034               asserts 144129 144129 144129      0      n/a
00:07:20.034  
00:07:20.034  Elapsed time =    9.656 seconds
00:07:20.034   05:52:40 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:07:20.034  
00:07:20.034  
00:07:20.034       CUnit - A unit testing framework for C - Version 2.1-3
00:07:20.034       http://cunit.sourceforge.net/
00:07:20.034  
00:07:20.034  
00:07:20.034  Suite: blob_bdev
00:07:20.034    Test: create_bs_dev ...passed
00:07:20.034    Test: create_bs_dev_ro ...[2024-11-18 05:52:41.009809] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 539:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:07:20.034  passed
00:07:20.034    Test: create_bs_dev_rw ...passed
00:07:20.034    Test: claim_bs_dev ...passed
00:07:20.034    Test: claim_bs_dev_ro ...[2024-11-18 05:52:41.010295] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 350:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:07:20.034  passed
00:07:20.034    Test: deferred_destroy_refs ...passed
00:07:20.034    Test: deferred_destroy_channels ...passed
00:07:20.034    Test: deferred_destroy_threads ...passed
00:07:20.034  
00:07:20.034  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.034                suites      1      1    n/a      0        0
00:07:20.034                 tests      8      8      8      0        0
00:07:20.034               asserts    119    119    119      0      n/a
00:07:20.034  
00:07:20.034  Elapsed time =    0.001 seconds
00:07:20.293   05:52:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:07:20.293  
00:07:20.293  
00:07:20.293       CUnit - A unit testing framework for C - Version 2.1-3
00:07:20.293       http://cunit.sourceforge.net/
00:07:20.293  
00:07:20.293  
00:07:20.293  Suite: tree
00:07:20.293    Test: blobfs_tree_op_test ...passed
00:07:20.293  
00:07:20.293  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.293                suites      1      1    n/a      0        0
00:07:20.293                 tests      1      1      1      0        0
00:07:20.293               asserts     27     27     27      0      n/a
00:07:20.293  
00:07:20.293  Elapsed time =    0.000 seconds
00:07:20.293   05:52:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:07:20.293  
00:07:20.293  
00:07:20.293       CUnit - A unit testing framework for C - Version 2.1-3
00:07:20.293       http://cunit.sourceforge.net/
00:07:20.293  
00:07:20.293  
00:07:20.293  Suite: blobfs_async_ut
00:07:20.293    Test: fs_init ...passed
00:07:20.293    Test: fs_open ...passed
00:07:20.293    Test: fs_create ...passed
00:07:20.293    Test: fs_truncate ...passed
00:07:20.293    Test: fs_rename ...[2024-11-18 05:52:41.194251] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted
00:07:20.293  passed
00:07:20.293    Test: fs_rw_async ...passed
00:07:20.293    Test: fs_writev_readv_async ...passed
00:07:20.293    Test: tree_find_buffer_ut ...passed
00:07:20.293    Test: channel_ops ...passed
00:07:20.293    Test: channel_ops_sync ...passed
00:07:20.293  
00:07:20.293  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.293                suites      1      1    n/a      0        0
00:07:20.293                 tests     10     10     10      0        0
00:07:20.293               asserts    292    292    292      0      n/a
00:07:20.293  
00:07:20.293  Elapsed time =    0.146 seconds
00:07:20.553   05:52:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
00:07:20.553  
00:07:20.553  
00:07:20.553       CUnit - A unit testing framework for C - Version 2.1-3
00:07:20.553       http://cunit.sourceforge.net/
00:07:20.553  
00:07:20.553  
00:07:20.553  Suite: blobfs_sync_ut
00:07:20.553    Test: cache_read_after_write ...[2024-11-18 05:52:41.370854] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted
00:07:20.553  passed
00:07:20.553    Test: file_length ...passed
00:07:20.553    Test: append_write_to_extend_blob ...passed
00:07:20.553    Test: partial_buffer ...passed
00:07:20.553    Test: cache_write_null_buffer ...passed
00:07:20.553    Test: fs_create_sync ...passed
00:07:20.553    Test: fs_rename_sync ...passed
00:07:20.553    Test: cache_append_no_cache ...passed
00:07:20.553    Test: fs_delete_file_without_close ...passed
00:07:20.553  
00:07:20.553  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.553                suites      1      1    n/a      0        0
00:07:20.553                 tests      9      9      9      0        0
00:07:20.553               asserts    345    345    345      0      n/a
00:07:20.553  
00:07:20.553  Elapsed time =    0.342 seconds
00:07:20.553   05:52:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut
00:07:20.814  
00:07:20.814  
00:07:20.814       CUnit - A unit testing framework for C - Version 2.1-3
00:07:20.814       http://cunit.sourceforge.net/
00:07:20.814  
00:07:20.814  
00:07:20.814  Suite: blobfs_bdev_ut
00:07:20.814    Test: spdk_blobfs_bdev_detect_test ...[2024-11-18 05:52:41.543824] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:20.814  passed
00:07:20.814    Test: spdk_blobfs_bdev_create_test ...[2024-11-18 05:52:41.544120] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:20.814  passed
00:07:20.814    Test: spdk_blobfs_bdev_mount_test ...passed
00:07:20.814  
00:07:20.814  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.814                suites      1      1    n/a      0        0
00:07:20.814                 tests      3      3      3      0        0
00:07:20.814               asserts      9      9      9      0      n/a
00:07:20.814  
00:07:20.814  Elapsed time =    0.001 seconds
00:07:20.814  
00:07:20.814  real	0m10.327s
00:07:20.814  user	0m9.628s
00:07:20.814  sys	0m0.878s
00:07:20.814   05:52:41 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:20.814   05:52:41 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x
00:07:20.814  ************************************
00:07:20.814  END TEST unittest_blob_blobfs
00:07:20.814  ************************************
00:07:20.814   05:52:41 unittest -- unit/unittest.sh@216 -- # run_test unittest_event unittest_event
00:07:20.814   05:52:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:20.814   05:52:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:20.814   05:52:41 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:20.814  ************************************
00:07:20.814  START TEST unittest_event
00:07:20.814  ************************************
00:07:20.814   05:52:41 unittest.unittest_event -- common/autotest_common.sh@1129 -- # unittest_event
00:07:20.814   05:52:41 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut
00:07:20.814  
00:07:20.814  
00:07:20.814       CUnit - A unit testing framework for C - Version 2.1-3
00:07:20.814       http://cunit.sourceforge.net/
00:07:20.814  
00:07:20.814  
00:07:20.814  Suite: app_suite
00:07:20.814    Test: test_spdk_app_parse_args ...app_ut [options]
00:07:20.814  
00:07:20.814  CPU options:
00:07:20.814   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:07:20.814                                   (like [0,1,10])
00:07:20.814       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:20.814                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:20.814                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:07:20.814                             Within the group, '-' is used for range separator,
00:07:20.814                             ',' is used for single number separator.
00:07:20.814                             '( )' can be omitted for single element group,
00:07:20.814                             '@' can be omitted if cpus and lcores have the same value
00:07:20.814       --disable-cpumask-locks    Disable CPU core lock files.
00:07:20.814       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:07:20.814                             pollers in the app support interrupt mode)
00:07:20.814   -p, --main-core <id>      main (primary) core for DPDK
00:07:20.814  
00:07:20.814  Configuration options:
00:07:20.814   -c, --config, --json  <config>     JSON config file
00:07:20.814   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:20.814       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:07:20.814       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:20.814       --rpcs-allowed	   comma-separated list of permitted RPCS
00:07:20.814       --json-ignore-init-errors    don't exit on invalid config entry
00:07:20.814  
00:07:20.814  Memory options:
00:07:20.814       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:20.814       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:20.814       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:20.814   -R, --huge-unlink         unlink huge files after initialization
00:07:20.814   -n, --mem-channels <num>  number of memory channels used for DPDK
00:07:20.814   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:20.814       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:20.814       --no-huge             run without using hugepages
00:07:20.814       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:07:20.814   -i, --shm-id <id>         shared memory ID (optional)
00:07:20.814   -g, --single-file-segments   force creating just one hugetlbfs file
00:07:20.814  
00:07:20.814  PCI options:
00:07:20.814   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:20.814   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:20.814   -u, --no-pci              disable PCI access
00:07:20.814       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:20.814  
00:07:20.814  Log options:
00:07:20.814   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:07:20.814       --silence-noticelog   disable notice level logging to stderr
00:07:20.814  
00:07:20.814  Trace options:
00:07:20.814       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:07:20.814                                   setting 0 to disable trace (default 32768)
00:07:20.814                                   Tracepoints vary in size and can use more than one trace entry.
00:07:20.814   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:20.814                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:07:20.814                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:07:20.814                             a tracepoint group. First tpoint inside a group can be enabled by
00:07:20.814                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:07:20.815                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:07:20.815                             in /include/spdk_internal/trace_defs.h
00:07:20.815  
00:07:20.815  Other options:
00:07:20.815   -h, --help                show this usage
00:07:20.815   -v, --version             print SPDK version
00:07:20.815   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:20.815       --env-context         Opaque context for use of the env implementation
00:07:20.815  app_ut [options]
00:07:20.815  
00:07:20.815  CPU options:
00:07:20.815   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:07:20.815                                   (like [0,1,10])
00:07:20.815       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:20.815                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:20.815                             lcores and cpus list are grouped by '(' and ')', e.g. '--lcores "(5-7)@(10-12)"'
00:07:20.815                             Within the group, '-' is used for range separator,
00:07:20.815                             ',' is used for single number separator.
00:07:20.815                             '( )' can be omitted for single element group,
00:07:20.815                             '@' can be omitted if cpus and lcores have the same value
00:07:20.815       --disable-cpumask-locks    Disable CPU core lock files.
00:07:20.815       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:07:20.815                             pollers in the app support interrupt mode)
00:07:20.815   -p, --main-core <id>      main (primary) core for DPDK
00:07:20.815  
00:07:20.815  Configuration options:
00:07:20.815   -c, --config, --json  <config>     JSON config file
00:07:20.815   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:20.815       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:07:20.815       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:20.815       --rpcs-allowed	   comma-separated list of permitted RPCs
00:07:20.815       --json-ignore-init-errors    don't exit on invalid config entry
00:07:20.815  
00:07:20.815  Memory options:
00:07:20.815       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:20.815       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:20.815       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:20.815   -R, --huge-unlink         unlink huge files after initialization
00:07:20.815   -n, --mem-channels <num>  number of memory channels used for DPDK
00:07:20.815   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:20.815       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:20.815       --no-huge             run without using hugepages
00:07:20.815       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:07:20.815   -i, --shm-id <id>         shared memory ID (optional)
00:07:20.815   -g, --single-file-segments   force creating just one hugetlbfs file
00:07:20.815  
00:07:20.815  PCI options:
00:07:20.815   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:20.815   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:20.815   -u, --no-pci              disable PCI access
00:07:20.815       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:20.815  
00:07:20.815  Log options:
00:07:20.815   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:07:20.815       --silence-noticelog   disable notice level logging to stderr
00:07:20.815  
00:07:20.815  Trace options:
00:07:20.815       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:07:20.815                                   setting 0 to disable trace (default 32768)
00:07:20.815                                   Tracepoints vary in size and can use more than one trace entry.
00:07:20.815   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:20.815                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:07:20.815                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:07:20.815                             a tracepoint group. First tpoint inside a group can be enabled by
00:07:20.815                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:07:20.815                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:07:20.815                             in /include/spdk_internal/trace_defs.h
00:07:20.815  
00:07:20.815  Other options:
00:07:20.815   -h, --help                show this usage
00:07:20.815   -v, --version             print SPDK version
00:07:20.815   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:20.815       --env-context         Opaque context for use of the env implementation
00:07:20.815  app_ut: invalid option -- 'z'
00:07:20.815  app_ut: unrecognized option '--test-long-opt'
00:07:20.815  app_ut [options]
00:07:20.815  
00:07:20.815  CPU options:
00:07:20.815   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:07:20.815                                   (like [0,1,10])
00:07:20.815       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:20.815                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:20.815                             lcores and cpus list are grouped by '(' and ')', e.g. '--lcores "(5-7)@(10-12)"'
00:07:20.815                             Within the group, '-' is used for range separator,
00:07:20.815                             ',' is used for single number separator.
00:07:20.815                             '( )' can be omitted for single element group,
00:07:20.815                             '@' can be omitted if cpus and lcores have the same value
00:07:20.815       --disable-cpumask-locks    Disable CPU core lock files.
00:07:20.815       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:07:20.815                             pollers in the app support interrupt mode)
00:07:20.815   -p, --main-core <id>      main (primary) core for DPDK
00:07:20.815  
00:07:20.815  Configuration options:
00:07:20.815   -c, --config, --json  <config>     JSON config file
00:07:20.815   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:20.815       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:07:20.815       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:20.815       --rpcs-allowed	   comma-separated list of permitted RPCs
00:07:20.815       --json-ignore-init-errors    don't exit on invalid config entry
00:07:20.815  
00:07:20.815  Memory options:
00:07:20.815       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:20.815       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:20.815       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:20.815   -R, --huge-unlink         unlink huge files after initialization
00:07:20.815   -n, --mem-channels <num>  number of memory channels used for DPDK
00:07:20.815   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:20.815       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:20.815       --no-huge             run without using hugepages
00:07:20.815       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:07:20.815   -i, --shm-id <id>         shared memory ID (optional)
00:07:20.815   -g, --single-file-segments   force creating just one hugetlbfs file
00:07:20.815  
00:07:20.815  PCI options:
00:07:20.815   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:20.815   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:20.815   -u, --no-pci              disable PCI access
00:07:20.815       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:20.815  
00:07:20.815  Log options:
00:07:20.815   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:07:20.815  [2024-11-18 05:52:41.625124] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1204:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts.
00:07:20.815  [2024-11-18 05:52:41.625377] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1388:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time
00:07:20.815       --silence-noticelog   disable notice level logging to stderr
00:07:20.815  
00:07:20.815  Trace options:
00:07:20.815       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:07:20.815                                   setting 0 to disable trace (default 32768)
00:07:20.815                                   Tracepoints vary in size and can use more than one trace entry.
00:07:20.815   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:20.815                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:07:20.815                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:07:20.815                             a tracepoint group. First tpoint inside a group can be enabled by
00:07:20.815                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:07:20.815                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:07:20.815                             in /include/spdk_internal/trace_defs.h
00:07:20.815  
00:07:20.815  Other options:
00:07:20.815   -h, --help                show this usage
00:07:20.815   -v, --version             print SPDK version
00:07:20.815   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:20.815       --env-context         Opaque context for use of the env implementation
00:07:20.815  [2024-11-18 05:52:41.625531] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1290:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments
00:07:20.815  passed
00:07:20.815  
00:07:20.815  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.815                suites      1      1    n/a      0        0
00:07:20.815                 tests      1      1      1      0        0
00:07:20.815               asserts      8      8      8      0      n/a
00:07:20.815  
00:07:20.815  Elapsed time =    0.001 seconds
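Most of the output above is app_ut deliberately feeding spdk_app_parse_args() bad combinations ('-z', '--test-long-opt', a duplicated 'c' option, '-B' together with '-W') and checking that each failure reprints the usage text. The tracepoint flags in that usage text follow a simple bit-mask scheme: within a group, bit N enables the (N+1)-th tracepoint, and groups can be combined. A hedged sketch of a real invocation (the binary name is again an assumption):

    # Enable every tracepoint in the 'thread' group plus only the first
    # tracepoint of the 'bdev' group (mask 0x1), as in the help text.
    build/bin/spdk_tgt -e thread,bdev:0x1

    # Mask arithmetic: tracepoint N inside a group is bit N-1, so the
    # third tracepoint alone would be 1 << 2 = 0x4.
    build/bin/spdk_tgt -e bdev:0x4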
00:07:20.815   05:52:41 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut
00:07:20.815  
00:07:20.815  
00:07:20.815       CUnit - A unit testing framework for C - Version 2.1-3
00:07:20.815       http://cunit.sourceforge.net/
00:07:20.815  
00:07:20.816  
00:07:20.816  Suite: app_suite
00:07:20.816    Test: test_create_reactor ...passed
00:07:20.816    Test: test_init_reactors ...passed
00:07:20.816    Test: test_event_call ...passed
00:07:20.816    Test: test_schedule_thread ...passed
00:07:20.816    Test: test_reschedule_thread ...passed
00:07:20.816    Test: test_bind_thread ...passed
00:07:20.816    Test: test_for_each_reactor ...passed
00:07:20.816    Test: test_reactor_stats ...passed
00:07:20.816    Test: test_scheduler ...passed
00:07:20.816    Test: test_governor ...passed
00:07:20.816    Test: test_scheduler_set_isolated_core_mask ...passed
00:07:20.816    Test: test_mixed_workload ...
00:07:20.816  [2024-11-18 05:52:41.686603] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:07:20.816  [2024-11-18 05:52:41.686824] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:07:20.816  passed
00:07:20.816  
00:07:20.816  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:20.816                suites      1      1    n/a      0        0
00:07:20.816                 tests     12     12     12      0        0
00:07:20.816               asserts    344    344    344      0      n/a
00:07:20.816  
00:07:20.816  Elapsed time =    0.029 seconds
00:07:20.816  
00:07:20.816  real	0m0.104s
00:07:20.816  user	0m0.058s
00:07:20.816  sys	0m0.046s
00:07:20.816   05:52:41 unittest.unittest_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:20.816   05:52:41 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x
00:07:20.816  ************************************
00:07:20.816  END TEST unittest_event
00:07:20.816  ************************************
00:07:20.816    05:52:41 unittest -- unit/unittest.sh@217 -- # uname -s
00:07:20.816   05:52:41 unittest -- unit/unittest.sh@217 -- # '[' Linux = Linux ']'
00:07:20.816   05:52:41 unittest -- unit/unittest.sh@218 -- # run_test unittest_ftl unittest_ftl
00:07:20.816   05:52:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:20.816   05:52:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:20.816   05:52:41 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:20.816  ************************************
00:07:20.816  START TEST unittest_ftl
00:07:20.816  ************************************
00:07:20.816   05:52:41 unittest.unittest_ftl -- common/autotest_common.sh@1129 -- # unittest_ftl
00:07:20.816   05:52:41 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut
00:07:21.075  
00:07:21.075  
00:07:21.075       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.075       http://cunit.sourceforge.net/
00:07:21.075  
00:07:21.075  
00:07:21.075  Suite: ftl_band_suite
00:07:21.075    Test: test_band_block_offset_from_addr_base ...passed
00:07:21.075    Test: test_band_block_offset_from_addr_offset ...passed
00:07:21.075    Test: test_band_addr_from_block_offset ...passed
00:07:21.075    Test: test_band_set_addr ...passed
00:07:21.075    Test: test_invalidate_addr ...passed
00:07:21.075    Test: test_next_xfer_addr ...passed
00:07:21.075  
00:07:21.075  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.075                suites      1      1    n/a      0        0
00:07:21.075                 tests      6      6      6      0        0
00:07:21.075               asserts  30356  30356  30356      0      n/a
00:07:21.075  
00:07:21.075  Elapsed time =    0.195 seconds
00:07:21.334   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut
00:07:21.334  
00:07:21.334  
00:07:21.334       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.334       http://cunit.sourceforge.net/
00:07:21.334  
00:07:21.334  
00:07:21.334  Suite: ftl_bitmap
00:07:21.334    Test: test_ftl_bitmap_create ...passed
00:07:21.334    Test: test_ftl_bitmap_get ...passed
00:07:21.334    Test: test_ftl_bitmap_set ...
00:07:21.334  [2024-11-18 05:52:42.075327] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes
00:07:21.334  [2024-11-18 05:52:42.075552] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes
00:07:21.334  passed
00:07:21.334    Test: test_ftl_bitmap_clear ...passed
00:07:21.334    Test: test_ftl_bitmap_find_first_set ...passed
00:07:21.334    Test: test_ftl_bitmap_find_first_clear ...passed
00:07:21.334    Test: test_ftl_bitmap_count_set ...passed
00:07:21.334  
00:07:21.334  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.334                suites      1      1    n/a      0        0
00:07:21.334                 tests      7      7      7      0        0
00:07:21.334               asserts    137    137    137      0      n/a
00:07:21.334  
00:07:21.334  Elapsed time =    0.001 seconds
00:07:21.334   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut
00:07:21.334  
00:07:21.334  
00:07:21.334       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.334       http://cunit.sourceforge.net/
00:07:21.334  
00:07:21.334  
00:07:21.334  Suite: ftl_io_suite
00:07:21.334    Test: test_completion ...passed
00:07:21.334    Test: test_multiple_ios ...passed
00:07:21.334  
00:07:21.334  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.334                suites      1      1    n/a      0        0
00:07:21.334                 tests      2      2      2      0        0
00:07:21.334               asserts     47     47     47      0      n/a
00:07:21.334  
00:07:21.334  Elapsed time =    0.004 seconds
00:07:21.334   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut
00:07:21.334  
00:07:21.334  
00:07:21.334       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.334       http://cunit.sourceforge.net/
00:07:21.334  
00:07:21.334  
00:07:21.334  Suite: ftl_mngt
00:07:21.334    Test: test_next_step ...passed
00:07:21.334    Test: test_continue_step ...passed
00:07:21.334    Test: test_get_func_and_step_cntx_alloc ...passed
00:07:21.334    Test: test_fail_step ...passed
00:07:21.334    Test: test_mngt_call_and_call_rollback ...passed
00:07:21.334    Test: test_nested_process_failure ...passed
00:07:21.334    Test: test_call_init_success ...passed
00:07:21.334    Test: test_call_init_failure ...passed
00:07:21.334  
00:07:21.334  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.334                suites      1      1    n/a      0        0
00:07:21.334                 tests      8      8      8      0        0
00:07:21.334               asserts    196    196    196      0      n/a
00:07:21.334  
00:07:21.334  Elapsed time =    0.002 seconds
00:07:21.334   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut
00:07:21.334  
00:07:21.334  
00:07:21.334       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.334       http://cunit.sourceforge.net/
00:07:21.334  
00:07:21.334  
00:07:21.334  Suite: ftl_mempool
00:07:21.334    Test: test_ftl_mempool_create ...passed
00:07:21.334    Test: test_ftl_mempool_get_put ...passed
00:07:21.334  
00:07:21.334  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.334                suites      1      1    n/a      0        0
00:07:21.334                 tests      2      2      2      0        0
00:07:21.334               asserts     36     36     36      0      n/a
00:07:21.334  
00:07:21.334  Elapsed time =    0.000 seconds
00:07:21.334   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut
00:07:21.334  
00:07:21.334  
00:07:21.334       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.334       http://cunit.sourceforge.net/
00:07:21.334  
00:07:21.334  
00:07:21.334  Suite: ftl_addr64_suite
00:07:21.334    Test: test_addr_cached ...passed
00:07:21.334  
00:07:21.334  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.334                suites      1      1    n/a      0        0
00:07:21.334                 tests      1      1      1      0        0
00:07:21.334               asserts   1536   1536   1536      0      n/a
00:07:21.334  
00:07:21.334  Elapsed time =    0.000 seconds
00:07:21.334   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut
00:07:21.334  
00:07:21.334  
00:07:21.334       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.334       http://cunit.sourceforge.net/
00:07:21.334  
00:07:21.334  
00:07:21.334  Suite: ftl_sb
00:07:21.334    Test: test_sb_crc_v2 ...passed
00:07:21.334    Test: test_sb_crc_v3 ...passed
00:07:21.334    Test: test_sb_v3_md_layout ...
00:07:21.334  [2024-11-18 05:52:42.219073] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions
00:07:21.334  [2024-11-18 05:52:42.219353] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:21.334  [2024-11-18 05:52:42.219395] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:21.335  [2024-11-18 05:52:42.219424] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:21.335  [2024-11-18 05:52:42.219450] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:21.335  [2024-11-18 05:52:42.219471] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found
00:07:21.335  [2024-11-18 05:52:42.219500] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:21.335  [2024-11-18 05:52:42.219520] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:21.335  passed
00:07:21.335    Test: test_sb_v5_md_layout ...
00:07:21.335  [2024-11-18 05:52:42.219587] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:21.335  [2024-11-18 05:52:42.219618] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:21.335  [2024-11-18 05:52:42.219657] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:21.335  passed
00:07:21.335  
00:07:21.335  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.335                suites      1      1    n/a      0        0
00:07:21.335                 tests      4      4      4      0        0
00:07:21.335               asserts    170    170    170      0      n/a
00:07:21.335  
00:07:21.335  Elapsed time =    0.002 seconds
00:07:21.335   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut
00:07:21.335  
00:07:21.335  
00:07:21.335       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.335       http://cunit.sourceforge.net/
00:07:21.335  
00:07:21.335  
00:07:21.335  Suite: ftl_layout_upgrade
00:07:21.335    Test: test_l2p_upgrade ...passed
00:07:21.335  
00:07:21.335  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.335                suites      1      1    n/a      0        0
00:07:21.335                 tests      1      1      1      0        0
00:07:21.335               asserts    164    164    164      0      n/a
00:07:21.335  
00:07:21.335  Elapsed time =    0.001 seconds
00:07:21.335   05:52:42 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut
00:07:21.335  
00:07:21.335  
00:07:21.335       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.335       http://cunit.sourceforge.net/
00:07:21.335  
00:07:21.335  
00:07:21.335  Suite: ftl_p2l_suite
00:07:21.335    Test: test_p2l_num_pages ...passed
00:07:21.335    Test: test_ckpt_issue ...passed
00:07:21.594    Test: test_persist_band_p2l ...passed
00:07:21.594    Test: test_clean_restore_p2l ...passed
00:07:21.594    Test: test_dirty_restore_p2l ...passed
00:07:21.594  
00:07:21.594  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.594                suites      1      1    n/a      0        0
00:07:21.594                 tests      5      5      5      0        0
00:07:21.594               asserts  10020  10020  10020      0      n/a
00:07:21.594  
00:07:21.594  Elapsed time =    0.086 seconds
00:07:21.594  
00:07:21.594  real	0m0.606s
00:07:21.594  user	0m0.296s
00:07:21.594  sys	0m0.310s
00:07:21.594   05:52:42 unittest.unittest_ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.594   05:52:42 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x
00:07:21.594  ************************************
00:07:21.594  END TEST unittest_ftl
00:07:21.594  ************************************
00:07:21.594   05:52:42 unittest -- unit/unittest.sh@221 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:21.594   05:52:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:21.594   05:52:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.594   05:52:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:21.594  ************************************
00:07:21.594  START TEST unittest_accel
00:07:21.594  ************************************
00:07:21.594   05:52:42 unittest.unittest_accel -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:21.594  
00:07:21.594  
00:07:21.594       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.594       http://cunit.sourceforge.net/
00:07:21.594  
00:07:21.594  
00:07:21.594  Suite: accel_sequence
00:07:21.594    Test: test_sequence_fill_copy ...passed
00:07:21.594    Test: test_sequence_abort ...passed
00:07:21.594    Test: test_sequence_append_error ...passed
00:07:21.594    Test: test_sequence_completion_error ...
00:07:21.594  [2024-11-18 05:52:42.466253] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2382:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7d57197fe7c0
00:07:21.594  [2024-11-18 05:52:42.466552] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2382:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7d57197fe7c0
00:07:21.594  [2024-11-18 05:52:42.466618] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2295:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7d57197fe7c0
00:07:21.594  [2024-11-18 05:52:42.466670] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2295:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7d57197fe7c0
00:07:21.594  passed
00:07:21.594    Test: test_sequence_decompress ...passed
00:07:21.594    Test: test_sequence_reverse ...passed
00:07:21.594    Test: test_sequence_copy_elision ...passed
00:07:21.594    Test: test_sequence_accel_buffers ...passed
00:07:21.594    Test: test_sequence_memory_domain ...
00:07:21.594  [2024-11-18 05:52:42.483027] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2187:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7
00:07:21.594  [2024-11-18 05:52:42.483262] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2226:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98
00:07:21.594  passed
00:07:21.594    Test: test_sequence_module_memory_domain ...passed
00:07:21.594    Test: test_sequence_crypto ...passed
00:07:21.594    Test: test_sequence_driver ...
00:07:21.594  [2024-11-18 05:52:42.493357] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2334:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7d571630c7c0 using driver: ut
00:07:21.594  [2024-11-18 05:52:42.493498] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2395:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7d571630c7c0 through driver: ut
00:07:21.594  passed
00:07:21.594    Test: test_sequence_same_iovs ...passed
00:07:21.594    Test: test_sequence_crc32 ...passed
00:07:21.594    Test: test_sequence_dix_generate_verify ...passed
00:07:21.594    Test: test_sequence_dix ...passed
00:07:21.594  Suite: accel
00:07:21.594    Test: test_spdk_accel_task_complete ...passed
00:07:21.594    Test: test_get_task ...passed
00:07:21.594    Test: test_spdk_accel_submit_copy ...passed
00:07:21.594    Test: test_spdk_accel_submit_dualcast ...
00:07:21.594  [2024-11-18 05:52:42.506695] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:21.594  [2024-11-18 05:52:42.506818] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:21.594  passed
00:07:21.594    Test: test_spdk_accel_submit_compare ...passed
00:07:21.594    Test: test_spdk_accel_submit_fill ...passed
00:07:21.594    Test: test_spdk_accel_submit_crc32c ...passed
00:07:21.594    Test: test_spdk_accel_submit_crc32cv ...passed
00:07:21.594    Test: test_spdk_accel_submit_copy_crc32c ...passed
00:07:21.594    Test: test_spdk_accel_submit_xor ...passed
00:07:21.594    Test: test_spdk_accel_module_find_by_name ...passed
00:07:21.594    Test: test_spdk_accel_module_register ...passed
00:07:21.594  
00:07:21.594  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.594                suites      2      2    n/a      0        0
00:07:21.594                 tests     28     28     28      0        0
00:07:21.594               asserts    884    884    884      0      n/a
00:07:21.594  
00:07:21.594  Elapsed time =    0.057 seconds
00:07:21.594  
00:07:21.594  real	0m0.102s
00:07:21.594  user	0m0.045s
00:07:21.594  sys	0m0.057s
00:07:21.594   05:52:42 unittest.unittest_accel -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.594   05:52:42 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x
00:07:21.594  ************************************
00:07:21.594  END TEST unittest_accel
00:07:21.594  ************************************
00:07:21.854   05:52:42 unittest -- unit/unittest.sh@222 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:21.854  ************************************
00:07:21.854  START TEST unittest_ioat
00:07:21.854  ************************************
00:07:21.854   05:52:42 unittest.unittest_ioat -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:21.854  
00:07:21.854  
00:07:21.854       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.854       http://cunit.sourceforge.net/
00:07:21.854  
00:07:21.854  
00:07:21.854  Suite: ioat
00:07:21.854    Test: ioat_state_check ...passed
00:07:21.854  
00:07:21.854  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.854                suites      1      1    n/a      0        0
00:07:21.854                 tests      1      1      1      0        0
00:07:21.854               asserts     32     32     32      0      n/a
00:07:21.854  
00:07:21.854  Elapsed time =    0.000 seconds
00:07:21.854  
00:07:21.854  real	0m0.028s
00:07:21.854  user	0m0.013s
00:07:21.854  sys	0m0.016s
00:07:21.854   05:52:42 unittest.unittest_ioat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.854   05:52:42 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x
00:07:21.854  ************************************
00:07:21.854  END TEST unittest_ioat
00:07:21.854  ************************************
00:07:21.854   05:52:42 unittest -- unit/unittest.sh@223 -- # [[ y == y ]]
00:07:21.854   05:52:42 unittest -- unit/unittest.sh@224 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:21.854  ************************************
00:07:21.854  START TEST unittest_idxd_user
00:07:21.854  ************************************
00:07:21.854   05:52:42 unittest.unittest_idxd_user -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:21.854  
00:07:21.854  
00:07:21.854       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.854       http://cunit.sourceforge.net/
00:07:21.854  
00:07:21.854  
00:07:21.854  Suite: idxd_user
00:07:21.854    Test: test_idxd_wait_cmd ...passed
00:07:21.854    Test: test_idxd_reset_dev ...
00:07:21.854  [2024-11-18 05:52:42.684933] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:21.854  [2024-11-18 05:52:42.685105] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1
00:07:21.854  [2024-11-18 05:52:42.685195] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:21.854  passed
00:07:21.854    Test: test_idxd_group_config ...passed
00:07:21.854    Test: test_idxd_wq_config ...passed
00:07:21.854  [2024-11-18 05:52:42.685224] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274
00:07:21.854  
00:07:21.854  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.854                suites      1      1    n/a      0        0
00:07:21.854                 tests      4      4      4      0        0
00:07:21.854               asserts     20     20     20      0      n/a
00:07:21.854  
00:07:21.854  Elapsed time =    0.001 seconds
00:07:21.854  
00:07:21.854  real	0m0.031s
00:07:21.854  user	0m0.013s
00:07:21.854  sys	0m0.018s
00:07:21.854   05:52:42 unittest.unittest_idxd_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.854   05:52:42 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x
00:07:21.854  ************************************
00:07:21.854  END TEST unittest_idxd_user
00:07:21.854  ************************************
00:07:21.854   05:52:42 unittest -- unit/unittest.sh@226 -- # run_test unittest_iscsi unittest_iscsi
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.854   05:52:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:21.854  ************************************
00:07:21.854  START TEST unittest_iscsi
00:07:21.854  ************************************
00:07:21.854   05:52:42 unittest.unittest_iscsi -- common/autotest_common.sh@1129 -- # unittest_iscsi
00:07:21.854   05:52:42 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut
00:07:21.854  
00:07:21.854  
00:07:21.854       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.854       http://cunit.sourceforge.net/
00:07:21.854  
00:07:21.854  
00:07:21.854  Suite: conn_suite
00:07:21.854    Test: read_task_split_in_order_case ...passed
00:07:21.854    Test: read_task_split_reverse_order_case ...passed
00:07:21.854    Test: propagate_scsi_error_status_for_split_read_tasks ...passed
00:07:21.854    Test: process_non_read_task_completion_test ...passed
00:07:21.854    Test: free_tasks_on_connection ...passed
00:07:21.854    Test: free_tasks_with_queued_datain ...passed
00:07:21.854    Test: abort_queued_datain_task_test ...passed
00:07:21.854    Test: abort_queued_datain_tasks_test ...passed
00:07:21.854  
00:07:21.854  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.854                suites      1      1    n/a      0        0
00:07:21.854                 tests      8      8      8      0        0
00:07:21.854               asserts    230    230    230      0      n/a
00:07:21.854  
00:07:21.854  Elapsed time =    0.001 seconds
00:07:21.854   05:52:42 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut
00:07:21.854  
00:07:21.854  
00:07:21.854       CUnit - A unit testing framework for C - Version 2.1-3
00:07:21.854       http://cunit.sourceforge.net/
00:07:21.854  
00:07:21.854  
00:07:21.854  Suite: iscsi_suite
00:07:21.854    Test: param_negotiation_test ...passed
00:07:21.854    Test: list_negotiation_test ...passed
00:07:21.854    Test: parse_valid_test ...passed
00:07:21.855    Test: parse_invalid_test ...
00:07:21.855  [2024-11-18 05:52:42.802580] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:07:21.855  [2024-11-18 05:52:42.802875] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:07:21.855  [2024-11-18 05:52:42.802918] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key
00:07:21.855  [2024-11-18 05:52:42.802959] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193
00:07:21.855  [2024-11-18 05:52:42.803099] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256
00:07:21.855  [2024-11-18 05:52:42.803153] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63
00:07:21.855  [2024-11-18 05:52:42.803242] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B
00:07:21.855  passed
00:07:21.855  
00:07:21.855  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:21.855                suites      1      1    n/a      0        0
00:07:21.855                 tests      4      4      4      0        0
00:07:21.855               asserts    161    161    161      0      n/a
00:07:21.855  
00:07:21.855  Elapsed time =    0.005 seconds
00:07:21.855   05:52:42 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut
00:07:22.114  
00:07:22.114  
00:07:22.114       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.114       http://cunit.sourceforge.net/
00:07:22.114  
00:07:22.114  
00:07:22.114  Suite: iscsi_target_node_suite
00:07:22.114    Test: add_lun_test_cases ...
00:07:22.114  [2024-11-18 05:52:42.833337] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1)
00:07:22.114  [2024-11-18 05:52:42.833573] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative
00:07:22.114  [2024-11-18 05:52:42.833625] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:22.114  passed
00:07:22.114    Test: allow_any_allowed ...passed
00:07:22.114    Test: allow_ipv6_allowed ...passed
00:07:22.114    Test: allow_ipv6_denied ...passed
00:07:22.114    Test: allow_ipv6_invalid ...passed
00:07:22.114    Test: allow_ipv4_allowed ...passed
00:07:22.114    Test: allow_ipv4_denied ...passed
00:07:22.114    Test: allow_ipv4_invalid ...passed
00:07:22.114    Test: node_access_allowed ...
00:07:22.114  [2024-11-18 05:52:42.833657] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:22.114  [2024-11-18 05:52:42.833699] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed
00:07:22.114  passed
00:07:22.114    Test: node_access_denied_by_empty_netmask ...passed
00:07:22.114    Test: node_access_multi_initiator_groups_cases ...passed
00:07:22.114    Test: allow_iscsi_name_multi_maps_case ...passed
00:07:22.114    Test: chap_param_test_cases ...
00:07:22.114  [2024-11-18 05:52:42.834243] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0)
00:07:22.114  passed
00:07:22.114  [2024-11-18 05:52:42.834287] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1)
00:07:22.114  [2024-11-18 05:52:42.834326] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1)
00:07:22.114  [2024-11-18 05:52:42.834369] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1)
00:07:22.114  [2024-11-18 05:52:42.834398] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1)
00:07:22.114  
00:07:22.114  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.115                suites      1      1    n/a      0        0
00:07:22.115                 tests     13     13     13      0        0
00:07:22.115               asserts     50     50     50      0      n/a
00:07:22.115  
00:07:22.115  Elapsed time =    0.001 seconds
00:07:22.115   05:52:42 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut
00:07:22.115  
00:07:22.115  
00:07:22.115       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.115       http://cunit.sourceforge.net/
00:07:22.115  
00:07:22.115  
00:07:22.115  Suite: iscsi_suite
00:07:22.115    Test: op_login_check_target_test ...passed
00:07:22.115    Test: op_login_session_normal_test ...
00:07:22.115  [2024-11-18 05:52:42.872255] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied
00:07:22.115  [2024-11-18 05:52:42.872534] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:22.115  [2024-11-18 05:52:42.872579] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:22.115  [2024-11-18 05:52:42.872614] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:22.115  [2024-11-18 05:52:42.872677] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed
00:07:22.115  passed
00:07:22.115    Test: maxburstlength_test ...
00:07:22.115  [2024-11-18 05:52:42.872774] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:22.115  [2024-11-18 05:52:42.872838] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0
00:07:22.115  [2024-11-18 05:52:42.872869] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:22.115  passed
00:07:22.115    Test: underflow_for_read_transfer_test ...
00:07:22.115  [2024-11-18 05:52:42.873102] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:22.115  [2024-11-18 05:52:42.873157] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL)
00:07:22.115  passed
00:07:22.115    Test: underflow_for_zero_read_transfer_test ...passed
00:07:22.115    Test: underflow_for_request_sense_test ...passed
00:07:22.115    Test: underflow_for_check_condition_test ...passed
00:07:22.115    Test: add_transfer_task_test ...passed
00:07:22.115    Test: get_transfer_task_test ...passed
00:07:22.115    Test: del_transfer_task_test ...passed
00:07:22.115    Test: clear_all_transfer_tasks_test ...passed
00:07:22.115    Test: build_iovs_test ...passed
00:07:22.115    Test: build_iovs_with_md_test ...passed
00:07:22.115    Test: pdu_hdr_op_login_test ...
00:07:22.115  [2024-11-18 05:52:42.874835] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error
00:07:22.115  [2024-11-18 05:52:42.874939] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0
00:07:22.115  [2024-11-18 05:52:42.875007] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2
00:07:22.115  passed
00:07:22.115    Test: pdu_hdr_op_text_test ...
00:07:22.115  [2024-11-18 05:52:42.875089] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:22.115  [2024-11-18 05:52:42.875154] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue
00:07:22.115  [2024-11-18 05:52:42.875195] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678...
00:07:22.115  passed
00:07:22.115    Test: pdu_hdr_op_logout_test ...passed
00:07:22.115    Test: pdu_hdr_op_scsi_test ...
00:07:22.115  [2024-11-18 05:52:42.875264] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason.
00:07:22.115  [2024-11-18 05:52:42.875404] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:22.115  [2024-11-18 05:52:42.875448] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:22.115  [2024-11-18 05:52:42.875474] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported
00:07:22.115  [2024-11-18 05:52:42.875545] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:22.115  [2024-11-18 05:52:42.875624] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67)
00:07:22.115  [2024-11-18 05:52:42.875823] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:07:22.115  passed
00:07:22.115    Test: pdu_hdr_op_task_mgmt_test ...
00:07:22.115  [2024-11-18 05:52:42.875923] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session
00:07:22.115  [2024-11-18 05:52:42.876014] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0
00:07:22.115  passed
00:07:22.115    Test: pdu_hdr_op_nopout_test ...
00:07:22.115  [2024-11-18 05:52:42.876209] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session
00:07:22.115  [2024-11-18 05:52:42.876288] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:22.115  [2024-11-18 05:52:42.876325] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:22.115  [2024-11-18 05:52:42.876352] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0
00:07:22.115  passed
00:07:22.115    Test: pdu_hdr_op_data_test ...
00:07:22.115  [2024-11-18 05:52:42.876398] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session
00:07:22.115  [2024-11-18 05:52:42.876443] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0
00:07:22.115  [2024-11-18 05:52:42.876486] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:22.115  [2024-11-18 05:52:42.876518] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1
00:07:22.115  [2024-11-18 05:52:42.876580] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error
00:07:22.115  [2024-11-18 05:52:42.876636] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error
00:07:22.115  [2024-11-18 05:52:42.876719] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535)
00:07:22.115  passed
00:07:22.115    Test: empty_text_with_cbit_test ...passed
00:07:22.115    Test: pdu_payload_read_test ...
00:07:22.115  [2024-11-18 05:52:42.878886] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536)
00:07:22.115  passed
00:07:22.115    Test: data_out_pdu_sequence_test ...passed
00:07:22.115    Test: immediate_data_and_data_out_pdu_sequence_test ...passed
00:07:22.115  
00:07:22.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.115                suites      1      1    n/a      0        0
00:07:22.115                 tests     24     24     24      0        0
00:07:22.115               asserts 150253 150253 150253      0      n/a
00:07:22.115  
00:07:22.115  Elapsed time =    0.017 seconds
00:07:22.115   05:52:42 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut
00:07:22.115  
00:07:22.115  
00:07:22.115       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.115       http://cunit.sourceforge.net/
00:07:22.115  
00:07:22.115  
00:07:22.115  Suite: init_grp_suite
00:07:22.115    Test: create_initiator_group_success_case ...passed
00:07:22.115    Test: find_initiator_group_success_case ...passed
00:07:22.115    Test: register_initiator_group_twice_case ...passed
00:07:22.115    Test: add_initiator_name_success_case ...passed
00:07:22.115    Test: add_initiator_name_fail_case ...
00:07:22.115  [2024-11-18 05:52:42.918900] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c:  54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed
00:07:22.115  passed
00:07:22.115    Test: delete_all_initiator_names_success_case ...passed
00:07:22.115    Test: add_netmask_success_case ...passed
00:07:22.115    Test: add_netmask_fail_case ...
00:07:22.115  [2024-11-18 05:52:42.919249] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed
00:07:22.115  passed
00:07:22.115    Test: delete_all_netmasks_success_case ...passed
00:07:22.115    Test: initiator_name_overwrite_all_to_any_case ...passed
00:07:22.115    Test: netmask_overwrite_all_to_any_case ...passed
00:07:22.115    Test: add_delete_initiator_names_case ...passed
00:07:22.115    Test: add_duplicated_initiator_names_case ...passed
00:07:22.115    Test: delete_nonexisting_initiator_names_case ...passed
00:07:22.115    Test: add_delete_netmasks_case ...passed
00:07:22.115    Test: add_duplicated_netmasks_case ...passed
00:07:22.115    Test: delete_nonexisting_netmasks_case ...passed
00:07:22.115  
00:07:22.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.115                suites      1      1    n/a      0        0
00:07:22.115                 tests     17     17     17      0        0
00:07:22.115               asserts    108    108    108      0      n/a
00:07:22.115  
00:07:22.115  Elapsed time =    0.001 seconds
00:07:22.115   05:52:42 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut
00:07:22.115  
00:07:22.115  
00:07:22.115       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.115       http://cunit.sourceforge.net/
00:07:22.115  
00:07:22.115  
00:07:22.115  Suite: portal_grp_suite
00:07:22.115    Test: portal_create_ipv4_normal_case ...passed
00:07:22.115    Test: portal_create_ipv6_normal_case ...passed
00:07:22.115    Test: portal_create_ipv4_wildcard_case ...passed
00:07:22.115    Test: portal_create_ipv6_wildcard_case ...passed
00:07:22.115    Test: portal_create_twice_case ...passed
00:07:22.115    Test: portal_grp_register_unregister_case ...
00:07:22.115  [2024-11-18 05:52:42.952502] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists
00:07:22.115  passed
00:07:22.115    Test: portal_grp_register_twice_case ...passed
00:07:22.115    Test: portal_grp_add_delete_case ...passed
00:07:22.115    Test: portal_grp_add_delete_twice_case ...passed
00:07:22.115  
00:07:22.115  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.115                suites      1      1    n/a      0        0
00:07:22.115                 tests      9      9      9      0        0
00:07:22.115               asserts     44     44     44      0      n/a
00:07:22.115  
00:07:22.116  Elapsed time =    0.004 seconds
00:07:22.116  
00:07:22.116  real	0m0.216s
00:07:22.116  user	0m0.113s
00:07:22.116  sys	0m0.105s
00:07:22.116   05:52:42 unittest.unittest_iscsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.116   05:52:42 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x
00:07:22.116  ************************************
00:07:22.116  END TEST unittest_iscsi
00:07:22.116  ************************************
00:07:22.116   05:52:43 unittest -- unit/unittest.sh@227 -- # run_test unittest_json unittest_json
00:07:22.116   05:52:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.116   05:52:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.116   05:52:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:22.116  ************************************
00:07:22.116  START TEST unittest_json
00:07:22.116  ************************************
00:07:22.116   05:52:43 unittest.unittest_json -- common/autotest_common.sh@1129 -- # unittest_json
00:07:22.116   05:52:43 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
00:07:22.116  
00:07:22.116  
00:07:22.116       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.116       http://cunit.sourceforge.net/
00:07:22.116  
00:07:22.116  
00:07:22.116  Suite: json
00:07:22.116    Test: test_parse_literal ...passed
00:07:22.116    Test: test_parse_string_simple ...passed
00:07:22.116    Test: test_parse_string_control_chars ...passed
00:07:22.116    Test: test_parse_string_utf8 ...passed
00:07:22.116    Test: test_parse_string_escapes_twochar ...passed
00:07:22.116    Test: test_parse_string_escapes_unicode ...passed
00:07:22.116    Test: test_parse_number ...passed
00:07:22.116    Test: test_parse_array ...passed
00:07:22.116    Test: test_parse_object ...passed
00:07:22.116    Test: test_parse_nesting ...passed
00:07:22.116    Test: test_parse_comment ...passed
00:07:22.116  
00:07:22.116  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.116                suites      1      1    n/a      0        0
00:07:22.116                 tests     11     11     11      0        0
00:07:22.116               asserts   1516   1516   1516      0      n/a
00:07:22.116  
00:07:22.116  Elapsed time =    0.002 seconds
00:07:22.116   05:52:43 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
00:07:22.116  
00:07:22.116  
00:07:22.116       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.116       http://cunit.sourceforge.net/
00:07:22.116  
00:07:22.116  
00:07:22.116  Suite: json
00:07:22.116    Test: test_strequal ...passed
00:07:22.116    Test: test_num_to_uint16 ...passed
00:07:22.116    Test: test_num_to_int32 ...passed
00:07:22.116    Test: test_num_to_uint64 ...passed
00:07:22.116    Test: test_decode_object ...passed
00:07:22.116    Test: test_decode_array ...passed
00:07:22.116    Test: test_decode_bool ...passed
00:07:22.116    Test: test_decode_uint16 ...passed
00:07:22.116    Test: test_decode_int32 ...passed
00:07:22.116    Test: test_decode_uint32 ...passed
00:07:22.116    Test: test_decode_uint64 ...passed
00:07:22.116    Test: test_decode_string ...passed
00:07:22.116    Test: test_decode_uuid ...passed
00:07:22.116    Test: test_find ...passed
00:07:22.116    Test: test_find_array ...passed
00:07:22.116    Test: test_iterating ...passed
00:07:22.116    Test: test_free_object ...passed
00:07:22.116  
00:07:22.116  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.116                suites      1      1    n/a      0        0
00:07:22.116                 tests     17     17     17      0        0
00:07:22.116               asserts    236    236    236      0      n/a
00:07:22.116  
00:07:22.116  Elapsed time =    0.001 seconds
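
    Context note: the json_util suite above covers the decode helpers (test_decode_object, test_decode_uint32, and so on). A sketch of how those helpers are typically combined, assuming the decoder-table layout from SPDK's include/spdk/json.h; the struct and key names here are illustrative only:

        #include <stddef.h>
        #include "spdk/json.h"
        #include "spdk/util.h"  /* SPDK_COUNTOF */

        struct my_params {
            char     *name;
            uint32_t  count;
        };

        /* Each entry maps a JSON key to a struct field and a decode helper. */
        static const struct spdk_json_object_decoder my_decoders[] = {
            {"name",  offsetof(struct my_params, name),  spdk_json_decode_string},
            {"count", offsetof(struct my_params, count), spdk_json_decode_uint32},
        };

        static int
        decode_params(const struct spdk_json_val *values, struct my_params *out)
        {
            /* Returns 0 on success; non-zero if keys are missing or mistyped. */
            return spdk_json_decode_object(values, my_decoders,
                                           SPDK_COUNTOF(my_decoders), out);
        }
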
00:07:22.116   05:52:43 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut
00:07:22.375  
00:07:22.375  
00:07:22.375       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.375       http://cunit.sourceforge.net/
00:07:22.375  
00:07:22.375  
00:07:22.375  Suite: json
00:07:22.375    Test: test_write_literal ...passed
00:07:22.375    Test: test_write_string_simple ...passed
00:07:22.375    Test: test_write_string_escapes ...passed
00:07:22.375    Test: test_write_string_utf16le ...passed
00:07:22.375    Test: test_write_number_int32 ...passed
00:07:22.375    Test: test_write_number_uint32 ...passed
00:07:22.375    Test: test_write_number_uint128 ...passed
00:07:22.375    Test: test_write_string_number_uint128 ...passed
00:07:22.375    Test: test_write_number_int64 ...passed
00:07:22.375    Test: test_write_number_uint64 ...passed
00:07:22.375    Test: test_write_number_double ...passed
00:07:22.375    Test: test_write_uuid ...passed
00:07:22.375    Test: test_write_array ...passed
00:07:22.375    Test: test_write_object ...passed
00:07:22.375    Test: test_write_nesting ...passed
00:07:22.375    Test: test_write_val ...passed
00:07:22.375  
00:07:22.375  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.375                suites      1      1    n/a      0        0
00:07:22.375                 tests     16     16     16      0        0
00:07:22.375               asserts    918    918    918      0      n/a
00:07:22.375  
00:07:22.375  Elapsed time =    0.007 seconds
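
    Context note: test_write_object and test_write_nesting above exercise SPDK's streaming JSON writer. A minimal sketch of emitting one object with it, assuming the callback signature used by spdk_json_write_begin() (the callback receives each encoded chunk); the field names written are illustrative:

        #include <stdio.h>
        #include "spdk/json.h"

        static int
        write_cb(void *cb_ctx, const void *data, size_t size)
        {
            fwrite(data, 1, size, stdout);  /* emit JSON as it is produced */
            return 0;
        }

        static void
        emit_example(void)
        {
            struct spdk_json_write_ctx *w = spdk_json_write_begin(write_cb, NULL, 0);

            spdk_json_write_object_begin(w);
            spdk_json_write_named_string(w, "driver", "nvme");
            spdk_json_write_named_uint32(w, "queues", 4);
            spdk_json_write_object_end(w);
            spdk_json_write_end(w);
        }
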
00:07:22.375   05:52:43 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut
00:07:22.375  
00:07:22.375  
00:07:22.375       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.375       http://cunit.sourceforge.net/
00:07:22.375  
00:07:22.375  
00:07:22.375  Suite: jsonrpc
00:07:22.375    Test: test_parse_request ...passed
00:07:22.375    Test: test_parse_request_streaming ...passed
00:07:22.375  
00:07:22.375  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.375                suites      1      1    n/a      0        0
00:07:22.375                 tests      2      2      2      0        0
00:07:22.375               asserts    289    289    289      0      n/a
00:07:22.375  
00:07:22.375  Elapsed time =    0.004 seconds
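
    Context note: the jsonrpc suite above parses payloads shaped like the standard JSON-RPC 2.0 request below (shown as a C string literal; rpc_get_methods is a real method name that also appears later in this log). test_parse_request_streaming feeds several such requests back-to-back across partial reads:

        static const char *req =
            "{\"jsonrpc\":\"2.0\",\"id\":1,"
            "\"method\":\"rpc_get_methods\",\"params\":{}}";
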
00:07:22.375  
00:07:22.375  real	0m0.131s
00:07:22.375  user	0m0.058s
00:07:22.375  sys	0m0.074s
00:07:22.375   05:52:43 unittest.unittest_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.375   05:52:43 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x
00:07:22.375  ************************************
00:07:22.375  END TEST unittest_json
00:07:22.375  ************************************
00:07:22.375   05:52:43 unittest -- unit/unittest.sh@228 -- # run_test unittest_rpc unittest_rpc
00:07:22.375   05:52:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.375   05:52:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.375   05:52:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:22.375  ************************************
00:07:22.375  START TEST unittest_rpc
00:07:22.375  ************************************
00:07:22.375   05:52:43 unittest.unittest_rpc -- common/autotest_common.sh@1129 -- # unittest_rpc
00:07:22.375   05:52:43 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut
00:07:22.375  
00:07:22.375  
00:07:22.375       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.375       http://cunit.sourceforge.net/
00:07:22.375  
00:07:22.375  
00:07:22.375  Suite: rpc
00:07:22.375    Test: test_jsonrpc_handler ...passed
00:07:22.375    Test: test_spdk_rpc_is_method_allowed ...passed
00:07:22.375    Test: test_rpc_get_methods ...[2024-11-18 05:52:43.218436] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed
00:07:22.375  passed
00:07:22.375    Test: test_rpc_spdk_get_version ...passed
00:07:22.375    Test: test_spdk_rpc_listen_close ...passed
00:07:22.375    Test: test_rpc_run_multiple_servers ...passed
00:07:22.375  
00:07:22.375  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.375                suites      1      1    n/a      0        0
00:07:22.375                 tests      6      6      6      0        0
00:07:22.375               asserts     23     23     23      0      n/a
00:07:22.376  
00:07:22.376  Elapsed time =    0.001 seconds
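
    Context note: the rpc_get_methods error above is the test driving the decode-failure branch of an RPC handler. A sketch of the registration pattern such handlers follow, assuming the SPDK_RPC_REGISTER macro and jsonrpc response helpers from SPDK's public headers; the method name my_method is hypothetical:

        #include "spdk/rpc.h"
        #include "spdk/jsonrpc.h"

        static void
        rpc_my_method(struct spdk_jsonrpc_request *request,
                      const struct spdk_json_val *params)
        {
            struct spdk_json_write_ctx *w;

            if (params != NULL) {
                /* Bad-params branch, analogous to the decode failure in the log. */
                spdk_jsonrpc_send_error_response(request,
                        SPDK_JSONRPC_ERROR_INVALID_PARAMS,
                        "my_method takes no parameters");
                return;
            }
            w = spdk_jsonrpc_begin_result(request);
            spdk_json_write_bool(w, true);
            spdk_jsonrpc_end_result(request, w);
        }
        SPDK_RPC_REGISTER("my_method", rpc_my_method, SPDK_RPC_RUNTIME)
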
00:07:22.376  ************************************
00:07:22.376  END TEST unittest_rpc
00:07:22.376  ************************************
00:07:22.376  
00:07:22.376  real	0m0.028s
00:07:22.376  user	0m0.009s
00:07:22.376  sys	0m0.019s
00:07:22.376   05:52:43 unittest.unittest_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.376   05:52:43 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:22.376   05:52:43 unittest -- unit/unittest.sh@229 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:22.376   05:52:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.376   05:52:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.376   05:52:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:22.376  ************************************
00:07:22.376  START TEST unittest_notify
00:07:22.376  ************************************
00:07:22.376   05:52:43 unittest.unittest_notify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:22.376  
00:07:22.376  
00:07:22.376       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.376       http://cunit.sourceforge.net/
00:07:22.376  
00:07:22.376  
00:07:22.376  Suite: app_suite
00:07:22.376    Test: notify ...passed
00:07:22.376  
00:07:22.376  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.376                suites      1      1    n/a      0        0
00:07:22.376                 tests      1      1      1      0        0
00:07:22.376               asserts     13     13     13      0      n/a
00:07:22.376  
00:07:22.376  Elapsed time =    0.000 seconds
00:07:22.376  
00:07:22.376  real	0m0.031s
00:07:22.376  user	0m0.016s
00:07:22.376  sys	0m0.015s
00:07:22.376   05:52:43 unittest.unittest_notify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.376   05:52:43 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x
00:07:22.376  ************************************
00:07:22.376  END TEST unittest_notify
00:07:22.376  ************************************
00:07:22.376   05:52:43 unittest -- unit/unittest.sh@230 -- # run_test unittest_nvme unittest_nvme
00:07:22.376   05:52:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:22.376   05:52:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:22.376   05:52:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:22.635  ************************************
00:07:22.635  START TEST unittest_nvme
00:07:22.635  ************************************
00:07:22.635   05:52:43 unittest.unittest_nvme -- common/autotest_common.sh@1129 -- # unittest_nvme
00:07:22.635   05:52:43 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut
00:07:22.635  
00:07:22.635  
00:07:22.635       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.635       http://cunit.sourceforge.net/
00:07:22.635  
00:07:22.635  
00:07:22.636  Suite: nvme
00:07:22.636    Test: test_opc_data_transfer ...passed
00:07:22.636    Test: test_spdk_nvme_transport_id_parse_trtype ...passed
00:07:22.636    Test: test_spdk_nvme_transport_id_parse_adrfam ...passed
00:07:22.636    Test: test_trid_parse_and_compare ...[2024-11-18 05:52:43.376515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1225:parse_next_key: *ERROR*: Key without ':' or '=' separator
00:07:22.636  [2024-11-18 05:52:43.376799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:22.636  [2024-11-18 05:52:43.376862] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1237:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31
00:07:22.636  [2024-11-18 05:52:43.376909] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:22.636  [2024-11-18 05:52:43.376960] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1248:parse_next_key: *ERROR*: Key without value
00:07:22.636  [2024-11-18 05:52:43.377011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:22.636  passed
00:07:22.636    Test: test_trid_trtype_str ...passed
00:07:22.636    Test: test_trid_adrfam_str ...passed
00:07:22.636    Test: test_nvme_ctrlr_probe ...[2024-11-18 05:52:43.377355] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 662:nvme_ctrlr_probe: *ERROR*: NVMe controller for SSD:  is being destructed
00:07:22.636  passed
00:07:22.636    Test: test_spdk_nvme_probe_ext ...[2024-11-18 05:52:43.377435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:22.636  [2024-11-18 05:52:43.377521] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:22.636  [2024-11-18 05:52:43.377571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:07:22.636  [2024-11-18 05:52:43.377728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available
00:07:22.636  passed
00:07:22.636    Test: test_spdk_nvme_connect ...[2024-11-18 05:52:43.377830] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:07:22.636  [2024-11-18 05:52:43.377969] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1036:spdk_nvme_connect: *ERROR*: No transport ID specified
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_probe_internal ...[2024-11-18 05:52:43.378457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:22.636  [2024-11-18 05:52:43.378654] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:22.636  [2024-11-18 05:52:43.378727] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:07:22.636  passed
00:07:22.636    Test: test_nvme_init_controllers ...[2024-11-18 05:52:43.378849] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 
00:07:22.636  passed
00:07:22.636    Test: test_nvme_driver_init ...[2024-11-18 05:52:43.378982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 576:nvme_driver_init: *ERROR*: primary process failed to reserve memory
00:07:22.636  [2024-11-18 05:52:43.379037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:22.636  [2024-11-18 05:52:43.492976] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 594:nvme_driver_init: *ERROR*: timeout waiting for primary process to init
00:07:22.636  [2024-11-18 05:52:43.493129] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 616:nvme_driver_init: *ERROR*: failed to initialize mutex
00:07:22.636  passed
00:07:22.636    Test: test_spdk_nvme_detach ...passed
00:07:22.636    Test: test_nvme_completion_poll_cb ...passed
00:07:22.636    Test: test_nvme_user_copy_cmd_complete ...passed
00:07:22.636    Test: test_nvme_allocate_request_null ...passed
00:07:22.636    Test: test_nvme_allocate_request ...passed
00:07:22.636    Test: test_nvme_free_request ...passed
00:07:22.636    Test: test_nvme_allocate_request_user_copy ...passed
00:07:22.636    Test: test_nvme_robust_mutex_init_shared ...passed
00:07:22.636    Test: test_nvme_request_check_timeout ...passed
00:07:22.636    Test: test_nvme_wait_for_completion ...passed
00:07:22.636    Test: test_spdk_nvme_parse_func ...passed
00:07:22.636    Test: test_spdk_nvme_detach_async ...passed
00:07:22.636    Test: test_nvme_parse_addr ...[2024-11-18 05:52:43.494350] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1682:nvme_parse_addr: *ERROR*: getaddrinfo failed: Name or service not known (-2)
00:07:22.636  passed
00:07:22.636  
00:07:22.636  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:22.636                suites      1      1    n/a      0        0
00:07:22.636                 tests     25     25     25      0        0
00:07:22.636               asserts    331    331    331      0      n/a
00:07:22.636  
00:07:22.636  Elapsed time =    0.007 seconds
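
    Context note: the parse_next_key errors above ("Key without ':' or '=' separator", "Key without value") come from feeding deliberately malformed transport-ID strings to the parser. For contrast, a sketch of the well-formed case; the PCI address is illustrative:

        #include "spdk/nvme.h"

        static int
        parse_trid_example(void)
        {
            struct spdk_nvme_transport_id trid = {};

            /* Well-formed input: whitespace-separated key:value pairs. */
            return spdk_nvme_transport_id_parse(&trid,
                       "trtype:PCIe traddr:0000:05:00.0");
        }
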
00:07:22.636   05:52:43 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut
00:07:22.636  
00:07:22.636  
00:07:22.636       CUnit - A unit testing framework for C - Version 2.1-3
00:07:22.636       http://cunit.sourceforge.net/
00:07:22.636  
00:07:22.636  
00:07:22.636  Suite: nvme_ctrlr
00:07:22.636    Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-18 05:52:43.530246] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-18 05:52:43.532103] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-18 05:52:43.533484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-18 05:52:43.534825] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-18 05:52:43.536160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  [2024-11-18 05:52:43.537370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:22.636  [2024-11-18 05:52:43.538604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:22.636  [2024-11-18 05:52:43.539853] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:22.636  passed

00:07:22.636    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-18 05:52:43.542337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  [2024-11-18 05:52:43.544733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:22.636  [2024-11-18 05:52:43.545983] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-18 05:52:43.548517] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  [2024-11-18 05:52:43.549752] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:22.636  [2024-11-18 05:52:43.552200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_init_delay ...[2024-11-18 05:52:43.554946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  passed
00:07:22.636    Test: test_alloc_io_qpair_rr_1 ...[2024-11-18 05:52:43.556376] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  [2024-11-18 05:52:43.556681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:07:22.636  [2024-11-18 05:52:43.556845] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:22.636  [2024-11-18 05:52:43.556900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:22.636  [2024-11-18 05:52:43.556950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:22.636  passed
00:07:22.636    Test: test_ctrlr_get_default_ctrlr_opts ...passed
00:07:22.636    Test: test_ctrlr_get_default_io_qpair_opts ...passed
00:07:22.636    Test: test_alloc_io_qpair_wrr_1 ...[2024-11-18 05:52:43.557093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  passed
00:07:22.636    Test: test_alloc_io_qpair_wrr_2 ...[2024-11-18 05:52:43.557321] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  [2024-11-18 05:52:43.557471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:07:22.636  passed
00:07:22.636    Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-18 05:52:43.557728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5051:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_update_firmware invalid size!
00:07:22.636  [2024-11-18 05:52:43.557839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:07:22.636  [2024-11-18 05:52:43.557927] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5128:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] nvme_ctrlr_cmd_fw_commit failed!
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_fail ...passed
00:07:22.636    Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...[2024-11-18 05:52:43.558019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:07:22.636  [2024-11-18 05:52:43.558092] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [, 0] in failed state.
00:07:22.636  passed
00:07:22.636    Test: test_nvme_ctrlr_set_supported_features ...passed
00:07:22.636    Test: test_nvme_ctrlr_set_host_feature ...[2024-11-18 05:52:43.558210] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:22.636  passed
00:07:22.636    Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed
00:07:22.636    Test: test_nvme_ctrlr_test_active_ns ...[2024-11-18 05:52:43.559664] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_test_active_ns_error_case ...passed
00:07:23.207    Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed
00:07:23.207    Test: test_spdk_nvme_ctrlr_set_trid ...passed
00:07:23.207    Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-18 05:52:43.888568] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-18 05:52:43.896140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-18 05:52:43.897474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  [2024-11-18 05:52:43.897575] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3039:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [, 0] Keep alive timeout Get Feature failed: SC 6 SCT 0
00:07:23.207  passed
00:07:23.207    Test: test_alloc_io_qpair_fail ...[2024-11-18 05:52:43.898877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_add_remove_process ...passed
00:07:23.207    Test: test_nvme_ctrlr_set_arbitration_feature ...passed
00:07:23.207    Test: test_nvme_ctrlr_set_state ...passed
00:07:23.207    Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-18 05:52:43.899003] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 505:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [, 0] nvme_transport_ctrlr_connect_io_qpair() failed
00:07:23.207  [2024-11-18 05:52:43.899226] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1554:_nvme_ctrlr_set_state: *ERROR*: [, 0] Specified timeout would cause integer overflow. Defaulting to no timeout.
00:07:23.207  [2024-11-18 05:52:43.899267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-18 05:52:43.922553] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-18 05:52:43.962206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_reset ...[2024-11-18 05:52:43.963745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_aer_callback ...[2024-11-18 05:52:43.964119] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-18 05:52:43.965582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed
00:07:23.207    Test: test_nvme_ctrlr_set_supported_log_pages ...passed
00:07:23.207    Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-18 05:52:43.967318] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_parse_ana_log_page ...passed
00:07:23.207    Test: test_nvme_ctrlr_ana_resize ...[2024-11-18 05:52:43.968729] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_get_memory_domains ...passed
00:07:23.207    Test: test_nvme_transport_ctrlr_ready ...[2024-11-18 05:52:43.970288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4194:nvme_ctrlr_process_init: *ERROR*: [, 0] Transport controller ready step failed: rc -1
00:07:23.207  [2024-11-18 05:52:43.970333] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4246:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr operation failed with error: -1, ctrlr state: 53 (error)
00:07:23.207  passed
00:07:23.207    Test: test_nvme_ctrlr_disable ...[2024-11-18 05:52:43.970378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:23.207  passed
00:07:23.207    Test: test_nvme_numa_id ...passed
00:07:23.207  
00:07:23.207  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.207                suites      1      1    n/a      0        0
00:07:23.207                 tests     45     45     45      0        0
00:07:23.207               asserts  10448  10448  10448      0      n/a
00:07:23.207  
00:07:23.207  Elapsed time =    0.399 seconds
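
    Context note: nearly every line above is the same nvme_ctrlr_construct warning, because the tests build controllers with zeroed options, so admin_queue_size 0 gets clamped up to the spec minimum. A sketch of the normal path, which starts from the defaults instead; function and field names assumed from include/spdk/nvme.h:

        #include "spdk/nvme.h"

        static struct spdk_nvme_ctrlr *
        connect_with_defaults(const struct spdk_nvme_transport_id *trid)
        {
            struct spdk_nvme_ctrlr_opts opts;

            spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
            /* opts.admin_queue_size now holds a spec-conforming default, not 0. */
            return spdk_nvme_connect(trid, &opts, sizeof(opts));
        }
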
00:07:23.207   05:52:43 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut
00:07:23.207  
00:07:23.207  
00:07:23.207       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.207       http://cunit.sourceforge.net/
00:07:23.207  
00:07:23.207  
00:07:23.207  Suite: nvme_ctrlr_cmd
00:07:23.207    Test: test_get_log_pages ...passed
00:07:23.207    Test: test_set_feature_cmd ...passed
00:07:23.207    Test: test_set_feature_ns_cmd ...passed
00:07:23.207    Test: test_get_feature_cmd ...passed
00:07:23.207    Test: test_get_feature_ns_cmd ...passed
00:07:23.207    Test: test_abort_cmd ...passed
00:07:23.207    Test: test_set_host_id_cmds ...[2024-11-18 05:52:44.017693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024
00:07:23.207  passed
00:07:23.207    Test: test_io_cmd_raw_no_payload_build ...passed
00:07:23.207    Test: test_io_raw_cmd ...passed
00:07:23.207    Test: test_io_raw_cmd_with_md ...passed
00:07:23.207    Test: test_namespace_attach ...passed
00:07:23.207    Test: test_namespace_detach ...passed
00:07:23.207    Test: test_namespace_create ...passed
00:07:23.207    Test: test_namespace_delete ...passed
00:07:23.207    Test: test_doorbell_buffer_config ...passed
00:07:23.207    Test: test_format_nvme ...passed
00:07:23.207    Test: test_fw_commit ...passed
00:07:23.207    Test: test_fw_image_download ...passed
00:07:23.207    Test: test_sanitize ...passed
00:07:23.207    Test: test_directive ...passed
00:07:23.207    Test: test_nvme_request_add_abort ...passed
00:07:23.207    Test: test_spdk_nvme_ctrlr_cmd_abort ...passed
00:07:23.207    Test: test_nvme_ctrlr_cmd_identify ...passed
00:07:23.208    Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed
00:07:23.208  
00:07:23.208  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.208                suites      1      1    n/a      0        0
00:07:23.208                 tests     24     24     24      0        0
00:07:23.208               asserts    198    198    198      0      n/a
00:07:23.208  
00:07:23.208  Elapsed time =    0.001 seconds
00:07:23.208   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut
00:07:23.208  
00:07:23.208  
00:07:23.208       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.208       http://cunit.sourceforge.net/
00:07:23.208  
00:07:23.208  
00:07:23.208  Suite: nvme_ctrlr_cmd
00:07:23.208    Test: test_geometry_cmd ...passed
00:07:23.208    Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed
00:07:23.208  
00:07:23.208  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.208                suites      1      1    n/a      0        0
00:07:23.208                 tests      2      2      2      0        0
00:07:23.208               asserts      7      7      7      0      n/a
00:07:23.208  
00:07:23.208  Elapsed time =    0.000 seconds
00:07:23.208   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut
00:07:23.208  
00:07:23.208  
00:07:23.208       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.208       http://cunit.sourceforge.net/
00:07:23.208  
00:07:23.208  
00:07:23.208  Suite: nvme
00:07:23.208    Test: test_nvme_ns_construct ...passed
00:07:23.208    Test: test_nvme_ns_uuid ...passed
00:07:23.208    Test: test_nvme_ns_csi ...passed
00:07:23.208    Test: test_nvme_ns_data ...passed
00:07:23.208    Test: test_nvme_ns_set_identify_data ...passed
00:07:23.208    Test: test_spdk_nvme_ns_get_values ...passed
00:07:23.208    Test: test_spdk_nvme_ns_is_active ...passed
00:07:23.208    Test: spdk_nvme_ns_supports ...passed
00:07:23.208    Test: test_nvme_ns_has_supported_iocs_specific_data ...passed
00:07:23.208    Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed
00:07:23.208    Test: test_nvme_ctrlr_identify_id_desc ...passed
00:07:23.208    Test: test_nvme_ns_find_id_desc ...passed
00:07:23.208  
00:07:23.208  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.208                suites      1      1    n/a      0        0
00:07:23.208                 tests     12     12     12      0        0
00:07:23.208               asserts     95     95     95      0      n/a
00:07:23.208  
00:07:23.208  Elapsed time =    0.001 seconds
00:07:23.208   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut
00:07:23.208  
00:07:23.208  
00:07:23.208       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.208       http://cunit.sourceforge.net/
00:07:23.208  
00:07:23.208  
00:07:23.208  Suite: nvme_ns_cmd
00:07:23.208    Test: split_test ...passed
00:07:23.208    Test: split_test2 ...passed
00:07:23.208    Test: split_test3 ...passed
00:07:23.208    Test: split_test4 ...passed
00:07:23.208    Test: test_nvme_ns_cmd_flush ...passed
00:07:23.208    Test: test_nvme_ns_cmd_dataset_management ...passed
00:07:23.208    Test: test_nvme_ns_cmd_copy ...passed
00:07:23.208    Test: test_io_flags ...[2024-11-18 05:52:44.097222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc
00:07:23.208  passed
00:07:23.208    Test: test_nvme_ns_cmd_write_zeroes ...passed
00:07:23.208    Test: test_nvme_ns_cmd_write_uncorrectable ...passed
00:07:23.208    Test: test_nvme_ns_cmd_reservation_register ...passed
00:07:23.208    Test: test_nvme_ns_cmd_reservation_release ...passed
00:07:23.208    Test: test_nvme_ns_cmd_reservation_acquire ...passed
00:07:23.208    Test: test_nvme_ns_cmd_reservation_report ...passed
00:07:23.208    Test: test_cmd_child_request ...passed
00:07:23.208    Test: test_nvme_ns_cmd_readv ...passed
00:07:23.208    Test: test_nvme_ns_cmd_readv_sgl ...[2024-11-18 05:52:44.098074] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 390:_nvme_ns_cmd_split_request_sgl: *ERROR*: Unable to send I/O. Would require more than the supported number of SGL Elements.
00:07:23.208  passed
00:07:23.208    Test: test_nvme_ns_cmd_read_with_md ...passed
00:07:23.208    Test: test_nvme_ns_cmd_writev ...passed
00:07:23.208    Test: test_nvme_ns_cmd_write_with_md ...[2024-11-18 05:52:44.098340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512
00:07:23.208  passed
00:07:23.208    Test: test_nvme_ns_cmd_zone_append_with_md ...passed
00:07:23.208    Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed
00:07:23.208    Test: test_nvme_ns_cmd_comparev ...passed
00:07:23.208    Test: test_nvme_ns_cmd_compare_and_write ...passed
00:07:23.208    Test: test_nvme_ns_cmd_compare_with_md ...passed
00:07:23.208    Test: test_nvme_ns_cmd_comparev_with_md ...passed
00:07:23.208    Test: test_nvme_ns_cmd_setup_request ...passed
00:07:23.208    Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed
00:07:23.208    Test: test_spdk_nvme_ns_cmd_writev_ext ...passed
00:07:23.208    Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-11-18 05:52:44.099724] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:23.208  passed
00:07:23.208    Test: test_nvme_ns_cmd_verify ...passed
00:07:23.208    Test: test_nvme_ns_cmd_io_mgmt_send ...passed
00:07:23.208    Test: test_nvme_ns_cmd_io_mgmt_recv ...[2024-11-18 05:52:44.099841] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:23.208  passed
00:07:23.208  
00:07:23.208  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.208                suites      1      1    n/a      0        0
00:07:23.208                 tests     33     33     33      0        0
00:07:23.208               asserts    569    569    569      0      n/a
00:07:23.208  
00:07:23.208  Elapsed time =    0.004 seconds
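
    Context note: test_io_flags above rejects the bitmask 0xfffc as invalid. For contrast, a sketch of a read submitted with one valid flag (Force Unit Access); ns, qpair, and buf are assumed to be set up elsewhere:

        #include "spdk/nvme.h"

        static int
        read_with_fua(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                      void *buf, uint64_t lba,
                      spdk_nvme_cmd_cb cb_fn, void *cb_arg)
        {
            /* io_flags is a validated bitmask; unknown bits fail as in the log. */
            return spdk_nvme_ns_cmd_read(ns, qpair, buf, lba, 1 /* one block */,
                                         cb_fn, cb_arg,
                                         SPDK_NVME_IO_FLAGS_FORCE_UNIT_ACCESS);
        }
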
00:07:23.208   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut
00:07:23.208  
00:07:23.208  
00:07:23.208       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.208       http://cunit.sourceforge.net/
00:07:23.208  
00:07:23.208  
00:07:23.208  Suite: nvme_ns_cmd
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_read ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_write ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed
00:07:23.208    Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed
00:07:23.208  
00:07:23.208  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.208                suites      1      1    n/a      0        0
00:07:23.208                 tests     12     12     12      0        0
00:07:23.208               asserts    123    123    123      0      n/a
00:07:23.208  
00:07:23.208  Elapsed time =    0.001 seconds
00:07:23.208   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
00:07:23.208  
00:07:23.208  
00:07:23.208       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.208       http://cunit.sourceforge.net/
00:07:23.208  
00:07:23.208  
00:07:23.208  Suite: nvme_qpair
00:07:23.208    Test: test3 ...passed
00:07:23.208    Test: test_ctrlr_failed ...passed
00:07:23.208    Test: struct_packing ...passed
00:07:23.208    Test: test_nvme_qpair_process_completions ...[2024-11-18 05:52:44.154445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:23.208  passed
00:07:23.208    Test: test_nvme_completion_is_retry ...passed
00:07:23.208    Test: test_get_status_string ...passed
00:07:23.208    Test: test_nvme_qpair_add_cmd_error_injection ...[2024-11-18 05:52:44.154647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:23.209  [2024-11-18 05:52:44.154714] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 0
00:07:23.209  [2024-11-18 05:52:44.154744] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 1
00:07:23.209  passed
00:07:23.209    Test: test_nvme_qpair_submit_request ...passed
00:07:23.209    Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed
00:07:23.209    Test: test_nvme_qpair_manual_complete_request ...passed
00:07:23.209    Test: test_nvme_qpair_init_deinit ...[2024-11-18 05:52:44.155199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:23.209  passed
00:07:23.209    Test: test_nvme_get_sgl_print_info ...passed
00:07:23.209  
00:07:23.209  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.209                suites      1      1    n/a      0        0
00:07:23.209                 tests     12     12     12      0        0
00:07:23.209               asserts    154    154    154      0      n/a
00:07:23.209  
00:07:23.209  Elapsed time =    0.001 seconds
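
    Context note: the qpair suite's "aborting queued i/o" and CQ transport error lines model what spdk_nvme_qpair_process_completions() reports on a dead connection. A minimal polling loop around that call; the done flag would be set from the I/O completion callback:

        #include <stdbool.h>
        #include "spdk/nvme.h"

        static void
        poll_until(struct spdk_nvme_qpair *qpair, volatile bool *done)
        {
            while (!*done) {
                /* max_completions = 0 means "drain whatever is ready". */
                int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);
                if (rc < 0) {
                    break;  /* e.g. CQ transport error, as in the log */
                }
            }
        }
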
00:07:23.209   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut
00:07:23.209  
00:07:23.209  
00:07:23.209       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.209       http://cunit.sourceforge.net/
00:07:23.209  
00:07:23.209  
00:07:23.209  Suite: nvme_pcie
00:07:23.209    Test: test_prp_list_append ...[2024-11-18 05:52:44.180894] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:23.209  [2024-11-18 05:52:44.181138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1271:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800)
00:07:23.209  [2024-11-18 05:52:44.181182] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1261:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed
00:07:23.209  [2024-11-18 05:52:44.181392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:23.209  passed
00:07:23.209    Test: test_nvme_pcie_hotplug_monitor ...[2024-11-18 05:52:44.181485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:23.209  passed
00:07:23.209    Test: test_shadow_doorbell_update ...passed
00:07:23.209    Test: test_build_contig_hw_sgl_request ...passed
00:07:23.209    Test: test_nvme_pcie_qpair_build_metadata ...passed
00:07:23.209    Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed
00:07:23.209    Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed
00:07:23.209    Test: test_nvme_pcie_qpair_build_contig_request ...passed
00:07:23.209    Test: test_nvme_pcie_ctrlr_regs_get_set ...passed
00:07:23.209    Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...[2024-11-18 05:52:44.181742] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:23.209  passed
00:07:23.209    Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed
00:07:23.209    Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed
00:07:23.209    Test: test_nvme_pcie_ctrlr_config_pmr ...[2024-11-18 05:52:44.181920] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues.
00:07:23.209  [2024-11-18 05:52:44.181991] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value
00:07:23.209  passed
00:07:23.209    Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-11-18 05:52:44.182046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled
00:07:23.209  [2024-11-18 05:52:44.182104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller
00:07:23.209  passed
00:07:23.209  
00:07:23.209  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.209                suites      1      1    n/a      0        0
00:07:23.209                 tests     14     14     14      0        0
00:07:23.209               asserts    235    235    235      0      n/a
00:07:23.209  
00:07:23.209  Elapsed time =    0.001 seconds
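
    Context note: the prp_list_append failures above are pure alignment checks: 0x100001 fails dword alignment and 0x900800 falls mid-page. A sketch of the same math, assuming the 4 KiB page size these tests use:

        #include <stdbool.h>
        #include <stdint.h>

        /* First PRP may carry an offset but must be dword (4-byte) aligned;
         * every later PRP entry must be page aligned. */
        static bool
        prp_entry_ok(uint64_t addr, bool first)
        {
            const uint64_t page = 4096;

            return first ? (addr & 0x3) == 0 : (addr & (page - 1)) == 0;
        }
        /* prp_entry_ok(0x100001, true)  -> false, "not dword aligned"
         * prp_entry_ok(0x900800, false) -> false, "not page aligned"   */
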
00:07:23.469   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut
00:07:23.469  
00:07:23.469  
00:07:23.469       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.469       http://cunit.sourceforge.net/
00:07:23.469  
00:07:23.469  
00:07:23.469  Suite: nvme_ns_cmd
00:07:23.469    Test: nvme_poll_group_create_test ...passed
00:07:23.469    Test: nvme_poll_group_add_remove_test ...[2024-11-18 05:52:44.211489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_poll_group.c: 188:spdk_nvme_poll_group_add: *ERROR*: Queue pair without interrupts cannot be added to poll group
00:07:23.469  passed
00:07:23.469    Test: nvme_poll_group_process_completions ...passed
00:07:23.469    Test: nvme_poll_group_destroy_test ...passed
00:07:23.469    Test: nvme_poll_group_get_free_stats ...passed
00:07:23.469  
00:07:23.469  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.469                suites      1      1    n/a      0        0
00:07:23.469                 tests      5      5      5      0        0
00:07:23.469               asserts    103    103    103      0      n/a
00:07:23.469  
00:07:23.469  Elapsed time =    0.001 seconds
00:07:23.469   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut
00:07:23.469  
00:07:23.469  
00:07:23.469       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.469       http://cunit.sourceforge.net/
00:07:23.469  
00:07:23.469  
00:07:23.469  Suite: nvme_quirks
00:07:23.469    Test: test_nvme_quirks_striping ...passed
00:07:23.469  
00:07:23.469  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.469                suites      1      1    n/a      0        0
00:07:23.469                 tests      1      1      1      0        0
00:07:23.469               asserts      5      5      5      0      n/a
00:07:23.469  
00:07:23.469  Elapsed time =    0.000 seconds
00:07:23.469   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut
00:07:23.469  
00:07:23.469  
00:07:23.469       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.469       http://cunit.sourceforge.net/
00:07:23.469  
00:07:23.469  
00:07:23.469  Suite: nvme_tcp
00:07:23.469    Test: test_nvme_tcp_pdu_set_data_buf ...passed
00:07:23.469    Test: test_nvme_tcp_build_iovs ...passed
00:07:23.469    Test: test_nvme_tcp_build_sgl_request ...[2024-11-18 05:52:44.271229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7a185e80d2d0, and the iovcnt=16, remaining_size=28672
00:07:23.469  passed
00:07:23.469    Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed
00:07:23.469    Test: test_nvme_tcp_build_iovs_with_md ...passed
00:07:23.469    Test: test_nvme_tcp_req_complete_safe ...passed
00:07:23.469    Test: test_nvme_tcp_req_get ...passed
00:07:23.469    Test: test_nvme_tcp_req_init ...passed
00:07:23.469    Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed
00:07:23.469    Test: test_nvme_tcp_qpair_write_pdu ...passed
00:07:23.469    Test: test_nvme_tcp_qpair_set_recv_state ...passed
00:07:23.469    Test: test_nvme_tcp_alloc_reqs ...[2024-11-18 05:52:44.271838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e409020 is same with the state(7) to be set
00:07:23.469  passed
00:07:23.469    Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed
00:07:23.469    Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-18 05:52:44.272233] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e709080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.272338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1133:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7a185e60a760
00:07:23.469  [2024-11-18 05:52:44.272379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1192:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0
00:07:23.469  [2024-11-18 05:52:44.272409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.272443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1143:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated
00:07:23.469  [2024-11-18 05:52:44.272475] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.272512] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:07:23.469  [2024-11-18 05:52:44.272549] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.272582] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.272634] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.272674] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  passed
00:07:23.469    Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-18 05:52:44.272720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.272756] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e60a080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.273031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3
00:07:23.469  [2024-11-18 05:52:44.273103] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:23.469  passed
00:07:23.469    Test: test_nvme_tcp_qpair_icreq_send ...[2024-11-18 05:52:44.273433] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:23.469  passed
00:07:23.469    Test: test_nvme_tcp_c2h_payload_handle ...passed
00:07:23.469    Test: test_nvme_tcp_icresp_handle ...[2024-11-18 05:52:44.273547] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7a185e60b5c0): PDU Sequence Error
00:07:23.469  [2024-11-18 05:52:44.273612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1476:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1
00:07:23.469  [2024-11-18 05:52:44.273651] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1483:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048
00:07:23.469  [2024-11-18 05:52:44.273690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e70b080 is same with the state(6) to be set
00:07:23.469  [2024-11-18 05:52:44.273720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1492:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64
00:07:23.469  [2024-11-18 05:52:44.273753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e70b080 is same with the state(6) to be set
00:07:23.469  passed
00:07:23.469    Test: test_nvme_tcp_pdu_payload_handle ...passed
00:07:23.469    Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-11-18 05:52:44.273832] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e70b080 is same with the state(0) to be set
00:07:23.469  [2024-11-18 05:52:44.273891] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7a185e60c5c0): PDU Sequence Error
00:07:23.469  passed
00:07:23.469    Test: test_nvme_tcp_ctrlr_connect_qpair ...passed
00:07:23.469    Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-18 05:52:44.273982] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1553:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7a185e70d210
00:07:23.469  [2024-11-18 05:52:44.274168] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 357:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7a185e8294b0, errno=0, rc=0
00:07:23.470  [2024-11-18 05:52:44.274217] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e8294b0 is same with the state(6) to be set
00:07:23.470  [2024-11-18 05:52:44.274256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a185e8294b0 is same with the state(6) to be set
00:07:23.470  passed
00:07:23.470    Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-18 05:52:44.274301] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a185e8294b0 (0): Success
00:07:23.470  [2024-11-18 05:52:44.274338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a185e8294b0 (0): Success
00:07:23.470  passed
00:07:23.470    Test: test_nvme_tcp_ctrlr_delete_io_qpair ...[2024-11-18 05:52:44.385216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:23.470  [2024-11-18 05:52:44.385337] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:23.470  passed
00:07:23.470    Test: test_nvme_tcp_poll_group_get_stats ...passed
00:07:23.470    Test: test_nvme_tcp_ctrlr_construct ...[2024-11-18 05:52:44.385675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:23.470  [2024-11-18 05:52:44.385716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:23.470  [2024-11-18 05:52:44.385944] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:23.470  [2024-11-18 05:52:44.385980] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:23.470  [2024-11-18 05:52:44.386065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254
00:07:23.470  [2024-11-18 05:52:44.386118] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:23.470  [2024-11-18 05:52:44.386230] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x515000001980 with addr=192.168.1.78, port=23
00:07:23.470  [2024-11-18 05:52:44.386273] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:23.470  passed
00:07:23.470    Test: test_nvme_tcp_qpair_submit_request ...[2024-11-18 05:52:44.386422] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x514000000c40, and the iovcnt=1, remaining_size=1024
00:07:23.470  [2024-11-18 05:52:44.386460] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 977:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed
00:07:23.470  passed
00:07:23.470  
00:07:23.470  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.470                suites      1      1    n/a      0        0
00:07:23.470                 tests     27     27     27      0        0
00:07:23.470               asserts    624    624    624      0      n/a
00:07:23.470  
00:07:23.470  Elapsed time =    0.115 seconds
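
    Context note: the "Minimum queue size is 2" failures above come from requesting 0- and 1-entry queues. A sketch of the usual allocation path, which takes the controller defaults first; function and field names assumed from include/spdk/nvme.h:

        #include "spdk/nvme.h"

        static struct spdk_nvme_qpair *
        alloc_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
        {
            struct spdk_nvme_io_qpair_opts opts;

            spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
            /* opts.io_queue_size defaults well above the minimum of 2. */
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
        }
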
00:07:23.470   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut
00:07:23.470  
00:07:23.470  
00:07:23.470       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.470       http://cunit.sourceforge.net/
00:07:23.470  
00:07:23.470  
00:07:23.470  Suite: nvme_transport
00:07:23.470    Test: test_nvme_get_transport ...passed
00:07:23.470    Test: test_nvme_transport_poll_group_connect_qpair ...passed
00:07:23.470    Test: test_nvme_transport_poll_group_disconnect_qpair ...passed
00:07:23.470    Test: test_nvme_transport_poll_group_add_remove ...passed
00:07:23.470    Test: test_ctrlr_get_memory_domains ...passed
00:07:23.470  
00:07:23.470  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.470                suites      1      1    n/a      0        0
00:07:23.470                 tests      5      5      5      0        0
00:07:23.470               asserts     28     28     28      0      n/a
00:07:23.470  
00:07:23.470  Elapsed time =    0.000 seconds
00:07:23.470   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut
00:07:23.730  
00:07:23.730  
00:07:23.730       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.730       http://cunit.sourceforge.net/
00:07:23.730  
00:07:23.730  
00:07:23.730  Suite: nvme_io_msg
00:07:23.730    Test: test_nvme_io_msg_send ...passed
00:07:23.730    Test: test_nvme_io_msg_process ...passed
00:07:23.730    Test: test_nvme_io_msg_ctrlr_register_unregister ...passed
00:07:23.730  
00:07:23.730  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.730                suites      1      1    n/a      0        0
00:07:23.730                 tests      3      3      3      0        0
00:07:23.730               asserts     56     56     56      0      n/a
00:07:23.730  
00:07:23.730  Elapsed time =    0.000 seconds
00:07:23.730   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut
00:07:23.730  
00:07:23.730  
00:07:23.730       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.730       http://cunit.sourceforge.net/
00:07:23.730  
00:07:23.730  
00:07:23.730  Suite: nvme_pcie_common
00:07:23.730    Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-11-18 05:52:44.472022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 112:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range!
00:07:23.730  passed
00:07:23.730    Test: test_nvme_pcie_qpair_construct_destroy ...passed
00:07:23.730    Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed
00:07:23.730    Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-18 05:52:44.472796] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 541:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed!
00:07:23.730  [2024-11-18 05:52:44.472851] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 494:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq!
00:07:23.730  [2024-11-18 05:52:44.472879] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 588:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq
00:07:23.730  passed
00:07:23.730    Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed
00:07:23.730    Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-18 05:52:44.473208] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:23.730  [2024-11-18 05:52:44.473236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:23.730  passed
00:07:23.730  
00:07:23.730  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.730                suites      1      1    n/a      0        0
00:07:23.730                 tests      6      6      6      0        0
00:07:23.730               asserts    148    148    148      0      n/a
00:07:23.730  
00:07:23.730  Elapsed time =    0.001 seconds
00:07:23.730   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut
00:07:23.730  
00:07:23.730  
00:07:23.730       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.730       http://cunit.sourceforge.net/
00:07:23.730  
00:07:23.730  
00:07:23.730  Suite: nvme_fabric
00:07:23.730    Test: test_nvme_fabric_prop_set_cmd ...passed
00:07:23.730    Test: test_nvme_fabric_prop_get_cmd ...passed
00:07:23.730    Test: test_nvme_fabric_get_discovery_log_page ...passed
00:07:23.730    Test: test_nvme_fabric_discover_probe ...passed
00:07:23.730    Test: test_nvme_fabric_qpair_connect ...[2024-11-18 05:52:44.500216] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1
00:07:23.730  passed
00:07:23.730  
00:07:23.730  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.730                suites      1      1    n/a      0        0
00:07:23.730                 tests      5      5      5      0        0
00:07:23.730               asserts     60     60     60      0      n/a
00:07:23.730  
00:07:23.730  Elapsed time =    0.001 seconds
00:07:23.730   05:52:44 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut
00:07:23.730  
00:07:23.730  
00:07:23.730       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.730       http://cunit.sourceforge.net/
00:07:23.730  
00:07:23.730  
00:07:23.730  Suite: nvme_opal
00:07:23.730    Test: test_opal_nvme_security_recv_send_done ...passed
00:07:23.730    Test: test_opal_add_short_atom_header ...[2024-11-18 05:52:44.530105] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer.
00:07:23.730  passed
00:07:23.730  
00:07:23.730  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:23.730                suites      1      1    n/a      0        0
00:07:23.730                 tests      2      2      2      0        0
00:07:23.730               asserts     22     22     22      0      n/a
00:07:23.730  
00:07:23.730  Elapsed time =    0.000 seconds
00:07:23.730  
00:07:23.730  real	0m1.185s
00:07:23.730  user	0m0.604s
00:07:23.730  sys	0m0.435s
00:07:23.730   05:52:44 unittest.unittest_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.730   05:52:44 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:23.730  ************************************
00:07:23.730  END TEST unittest_nvme
00:07:23.730  ************************************
00:07:23.730   05:52:44 unittest -- unit/unittest.sh@231 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:23.730   05:52:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:23.730   05:52:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.730   05:52:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:23.730  ************************************
00:07:23.730  START TEST unittest_log
00:07:23.730  ************************************
00:07:23.730   05:52:44 unittest.unittest_log -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:23.730  
00:07:23.730  
00:07:23.730       CUnit - A unit testing framework for C - Version 2.1-3
00:07:23.730       http://cunit.sourceforge.net/
00:07:23.730  
00:07:23.730  
00:07:23.730  Suite: log
00:07:23.730    Test: log_test ...[2024-11-18 05:52:44.604109] log_ut.c:  56:log_test: *WARNING*: log warning unit test
00:07:23.730  [2024-11-18 05:52:44.604345] log_ut.c:  57:log_test: *DEBUG*: log test
00:07:23.730  log dump test:
00:07:23.730  00000000  6c 6f 67 20 64 75 6d 70                            log dump
00:07:23.730  spdk dump test:
00:07:23.730  00000000  73 70 64 6b 20 64 75 6d  70                        spdk dump
00:07:23.730  spdk dump test:
00:07:23.730  00000000  73 70 64 6b 20 64 75 6d  70 20 31 36 20 6d 6f 72  spdk dump 16 mor
00:07:23.730  00000010  65 20 63 68 61 72 73                              e chars
00:07:23.730  passed
00:07:24.669    Test: deprecation ...passed
00:07:24.669    Test: log_ext_test ...passed
00:07:24.669  
00:07:24.669  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.669                suites      1      1    n/a      0        0
00:07:24.669                 tests      3      3      3      0        0
00:07:24.669               asserts     77     77     77      0      n/a
00:07:24.669  
00:07:24.669  Elapsed time =    0.001 seconds
00:07:24.669  
00:07:24.669  real	0m1.033s
00:07:24.669  user	0m0.019s
00:07:24.669  sys	0m0.014s
00:07:24.669   05:52:45 unittest.unittest_log -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.669   05:52:45 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x
00:07:24.669  ************************************
00:07:24.669  END TEST unittest_log
00:07:24.669  ************************************
00:07:24.930   05:52:45 unittest -- unit/unittest.sh@232 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:24.930   05:52:45 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.930   05:52:45 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.930   05:52:45 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:24.930  ************************************
00:07:24.930  START TEST unittest_lvol
00:07:24.930  ************************************
00:07:24.930   05:52:45 unittest.unittest_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:24.930  
00:07:24.930  
00:07:24.930       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.930       http://cunit.sourceforge.net/
00:07:24.930  
00:07:24.930  
00:07:24.930  Suite: lvol
00:07:24.930    Test: lvs_init_unload_success ...[2024-11-18 05:52:45.700527] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store
00:07:24.930  passed
00:07:24.930    Test: lvs_init_destroy_success ...[2024-11-18 05:52:45.701038] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store
00:07:24.930  passed
00:07:24.930    Test: lvs_init_opts_success ...passed
00:07:24.930    Test: lvs_unload_lvs_is_null_fail ...[2024-11-18 05:52:45.701260] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL
00:07:24.930  passed
00:07:24.930    Test: lvs_names ...[2024-11-18 05:52:45.701310] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified.
00:07:24.930  [2024-11-18 05:52:45.701346] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator.
00:07:24.930  [2024-11-18 05:52:45.701497] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists
00:07:24.930  passed
00:07:24.930    Test: lvol_create_destroy_success ...passed
00:07:24.930    Test: lvol_create_fail ...[2024-11-18 05:52:45.702081] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist
00:07:24.930  [2024-11-18 05:52:45.702180] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist
00:07:24.930  passed
00:07:24.930    Test: lvol_destroy_fail ...[2024-11-18 05:52:45.702436] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal
00:07:24.930  passed
00:07:24.930    Test: lvol_close ...[2024-11-18 05:52:45.702629] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist
00:07:24.930  [2024-11-18 05:52:45.702694] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol
00:07:24.930  passed
00:07:24.930    Test: lvol_resize ...passed
00:07:24.930    Test: lvol_set_read_only ...passed
00:07:24.930    Test: test_lvs_load ...[2024-11-18 05:52:45.703404] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value
00:07:24.930  [2024-11-18 05:52:45.703454] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options
00:07:24.930  passed
00:07:24.930    Test: lvols_load ...[2024-11-18 05:52:45.703646] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:24.930  [2024-11-18 05:52:45.703748] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:24.930  passed
00:07:24.930    Test: lvol_open ...passed
00:07:24.930    Test: lvol_snapshot ...passed
00:07:24.930    Test: lvol_snapshot_fail ...[2024-11-18 05:52:45.704448] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists
00:07:24.930  passed
00:07:24.930    Test: lvol_clone ...passed
00:07:24.930    Test: lvol_clone_fail ...[2024-11-18 05:52:45.704865] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists
00:07:24.930  passed
00:07:24.930    Test: lvol_iter_clones ...passed
00:07:24.930    Test: lvol_refcnt ...[2024-11-18 05:52:45.705254] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 8ef59780-9315-4abc-b44a-95bbb11cc63d because it is still open
00:07:24.930  passed
00:07:24.930    Test: lvol_names ...[2024-11-18 05:52:45.705383] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:24.930  [2024-11-18 05:52:45.705469] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:24.930  [2024-11-18 05:52:45.705652] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created
00:07:24.930  passed
00:07:24.930    Test: lvol_create_thin_provisioned ...passed
00:07:24.930    Test: lvol_rename ...[2024-11-18 05:52:45.706025] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:24.930  [2024-11-18 05:52:45.706119] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs
00:07:24.930  passed
00:07:24.930    Test: lvs_rename ...[2024-11-18 05:52:45.706335] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed
00:07:24.930  passed
00:07:24.930    Test: lvol_inflate ...[2024-11-18 05:52:45.706467] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:24.930  passed
00:07:24.930    Test: lvol_decouple_parent ...[2024-11-18 05:52:45.706663] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:24.930  passed
00:07:24.930    Test: lvol_get_xattr ...passed
00:07:24.930    Test: lvol_esnap_reload ...passed
00:07:24.930    Test: lvol_esnap_create_bad_args ...[2024-11-18 05:52:45.706993] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist
00:07:24.930  [2024-11-18 05:52:45.707039] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:24.930  [2024-11-18 05:52:45.707076] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576
00:07:24.930  [2024-11-18 05:52:45.707130] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:24.930  [2024-11-18 05:52:45.707207] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists
00:07:24.930  passed
00:07:24.930    Test: lvol_esnap_create_delete ...passed
00:07:24.930    Test: lvol_esnap_load_esnaps ...[2024-11-18 05:52:45.707456] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context
00:07:24.930  passed
00:07:24.930    Test: lvol_esnap_missing ...[2024-11-18 05:52:45.707604] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:24.930  [2024-11-18 05:52:45.707643] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:24.930  passed
00:07:24.930    Test: lvol_esnap_hotplug ...
00:07:24.930  	lvol_esnap_hotplug scenario 0: PASS - one missing, happy path
00:07:24.930  	lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set
00:07:24.930  	lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM
00:07:24.930  [2024-11-18 05:52:45.708239] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol e14e7287-31cb-47ae-aa2f-aeb9ecc46a64: failed to create esnap bs_dev: error -12
00:07:24.930  	lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path
00:07:24.930  [2024-11-18 05:52:45.708436] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol c5e21849-2b86-4d74-bcf1-e6ea88aed1d4: failed to create esnap bs_dev: error -12
00:07:24.930  	lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM
00:07:24.930  	lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM
00:07:24.930  [2024-11-18 05:52:45.708539] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol dc445e3d-6317-4436-89b2-6ba9b7224215: failed to create esnap bs_dev: error -12
00:07:24.930  	lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path
00:07:24.930  	lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing
00:07:24.930  	lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path
00:07:24.930  	lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing
00:07:24.930  	lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing
00:07:24.930  	lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing
00:07:24.930  	lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing
00:07:24.930  passed
00:07:24.930    Test: lvol_get_by ...passed
00:07:24.930    Test: lvol_shallow_copy ...[2024-11-18 05:52:45.709694] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:07:24.930  [2024-11-18 05:52:45.709749] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 7273c121-7220-4bba-99ec-1ae4a8509b73 shallow copy, ext_dev must not be NULL
00:07:24.931  passed
00:07:24.931    Test: lvol_set_parent ...[2024-11-18 05:52:45.709948] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL
00:07:24.931  [2024-11-18 05:52:45.709977] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL
00:07:24.931  passed
00:07:24.931    Test: lvol_set_external_parent ...[2024-11-18 05:52:45.710140] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL
00:07:24.931  [2024-11-18 05:52:45.710173] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL
00:07:24.931  [2024-11-18 05:52:45.710191] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID
00:07:24.931  passed
00:07:24.931  
00:07:24.931  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.931                suites      1      1    n/a      0        0
00:07:24.931                 tests     37     37     37      0        0
00:07:24.931               asserts   1505   1505   1505      0      n/a
00:07:24.931  
00:07:24.931  Elapsed time =    0.010 seconds
00:07:24.931  
00:07:24.931  real	0m0.051s
00:07:24.931  user	0m0.024s
00:07:24.931  sys	0m0.028s
00:07:24.931   05:52:45 unittest.unittest_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.931   05:52:45 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:24.931  ************************************
00:07:24.931  END TEST unittest_lvol
00:07:24.931  ************************************
00:07:24.931   05:52:45 unittest -- unit/unittest.sh@233 -- # [[ y == y ]]
00:07:24.931   05:52:45 unittest -- unit/unittest.sh@234 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:24.931   05:52:45 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.931   05:52:45 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.931   05:52:45 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:24.931  ************************************
00:07:24.931  START TEST unittest_nvme_rdma
00:07:24.931  ************************************
00:07:24.931   05:52:45 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:24.931  
00:07:24.931  
00:07:24.931       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.931       http://cunit.sourceforge.net/
00:07:24.931  
00:07:24.931  
00:07:24.931  Suite: nvme_rdma
00:07:24.931    Test: test_nvme_rdma_build_sgl_request ...[2024-11-18 05:52:45.803737] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1390:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34
00:07:24.931  [2024-11-18 05:52:45.804079] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1577:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:24.931  [2024-11-18 05:52:45.804131] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1633:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60)
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_build_sgl_inline_request ...passed
00:07:24.931    Test: test_nvme_rdma_build_contig_request ...[2024-11-18 05:52:45.804267] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1529:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_build_contig_inline_request ...passed
00:07:24.931    Test: test_nvme_rdma_create_reqs ...[2024-11-18 05:52:45.804465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 921:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_create_rsps ...[2024-11-18 05:52:45.804949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 839:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-11-18 05:52:45.805156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1765:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:24.931  [2024-11-18 05:52:45.805203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1765:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_poller_create ...passed
00:07:24.931    Test: test_nvme_rdma_qpair_process_cm_event ...[2024-11-18 05:52:45.805443] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 447:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255]
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_ctrlr_construct ...passed
00:07:24.931    Test: test_nvme_rdma_req_put_and_get ...passed
00:07:24.931    Test: test_nvme_rdma_req_init ...passed
00:07:24.931    Test: test_nvme_rdma_validate_cm_event ...[2024-11-18 05:52:45.805868] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0)
00:07:24.931  [2024-11-18 05:52:45.805933] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10)
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_qpair_init ...passed
00:07:24.931    Test: test_nvme_rdma_qpair_submit_request ...passed
00:07:24.931    Test: test_rdma_ctrlr_get_memory_domains ...passed
00:07:24.931    Test: test_rdma_get_memory_translation ...[2024-11-18 05:52:45.806106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0
00:07:24.931  [2024-11-18 05:52:45.806181] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1390:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1
00:07:24.931  passed
00:07:24.931    Test: test_get_rdma_qpair_from_wc ...passed
00:07:24.931    Test: test_nvme_rdma_ctrlr_get_max_sges ...passed
00:07:24.931    Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-18 05:52:45.806314] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3262:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:24.931  [2024-11-18 05:52:45.806369] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3262:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:24.931  passed
00:07:24.931    Test: test_nvme_rdma_qpair_set_poller ...[2024-11-18 05:52:45.806584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2965:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:24.931  [2024-11-18 05:52:45.806649] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3011:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef
00:07:24.931  [2024-11-18 05:52:45.806681] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 644:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x79b87ff13200 on poll group 0x50c000000040
00:07:24.931  [2024-11-18 05:52:45.806787] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2965:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:24.931  [2024-11-18 05:52:45.806839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3011:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil)
00:07:24.931  [2024-11-18 05:52:45.806876] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 644:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x79b87ff13200 on poll group 0x50c000000040
00:07:24.931  [2024-11-18 05:52:45.806962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 622:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:24.931  passed
00:07:24.931  
00:07:24.931  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.931                suites      1      1    n/a      0        0
00:07:24.931                 tests     21     21     21      0        0
00:07:24.931               asserts    395    395    395      0      n/a
00:07:24.931  
00:07:24.931  Elapsed time =    0.003 seconds
00:07:24.931  
00:07:24.931  real	0m0.037s
00:07:24.931  user	0m0.017s
00:07:24.931  sys	0m0.020s
00:07:24.931   05:52:45 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.931   05:52:45 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:24.931  ************************************
00:07:24.931  END TEST unittest_nvme_rdma
00:07:24.931  ************************************
00:07:24.931   05:52:45 unittest -- unit/unittest.sh@235 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:24.931   05:52:45 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.931   05:52:45 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.931   05:52:45 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:24.931  ************************************
00:07:24.931  START TEST unittest_nvmf_transport
00:07:24.931  ************************************
00:07:24.931   05:52:45 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:24.931  
00:07:24.931  
00:07:24.931       CUnit - A unit testing framework for C - Version 2.1-3
00:07:24.931       http://cunit.sourceforge.net/
00:07:24.931  
00:07:24.931  
00:07:24.931  Suite: nvmf
00:07:24.931    Test: test_spdk_nvmf_transport_create ...[2024-11-18 05:52:45.896751] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable.
00:07:24.931  [2024-11-18 05:52:45.897087] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0
00:07:24.931  [2024-11-18 05:52:45.897157] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536
00:07:24.931  [2024-11-18 05:52:45.897251] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB
00:07:24.931  passed
00:07:24.931    Test: test_nvmf_transport_poll_group_create ...passed
00:07:24.931    Test: test_spdk_nvmf_transport_opts_init ...[2024-11-18 05:52:45.897540] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable.
00:07:24.931  [2024-11-18 05:52:45.897593] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL
00:07:24.931  [2024-11-18 05:52:45.897630] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value
00:07:24.931  passed
00:07:24.931    Test: test_spdk_nvmf_transport_listen_ext ...passed
00:07:24.931  
00:07:24.931  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:24.931                suites      1      1    n/a      0        0
00:07:24.931                 tests      4      4      4      0        0
00:07:24.931               asserts     49     49     49      0      n/a
00:07:24.931  
00:07:24.931  Elapsed time =    0.001 seconds
00:07:25.191  
00:07:25.191  real	0m0.041s
00:07:25.191  user	0m0.020s
00:07:25.191  sys	0m0.021s
00:07:25.191   05:52:45 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.191   05:52:45 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x
00:07:25.191  ************************************
00:07:25.191  END TEST unittest_nvmf_transport
00:07:25.191  ************************************
00:07:25.191   05:52:45 unittest -- unit/unittest.sh@236 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:25.191   05:52:45 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.191   05:52:45 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.191   05:52:45 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:25.191  ************************************
00:07:25.191  START TEST unittest_rdma
00:07:25.191  ************************************
00:07:25.191   05:52:45 unittest.unittest_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:25.191  
00:07:25.191  
00:07:25.191       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.191       http://cunit.sourceforge.net/
00:07:25.191  
00:07:25.191  
00:07:25.191  Suite: rdma_common
00:07:25.191    Test: test_spdk_rdma_pd ...[2024-11-18 05:52:45.990151] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:07:25.191  [2024-11-18 05:52:45.990492] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:07:25.191  passed
00:07:25.191  
00:07:25.191  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.191                suites      1      1    n/a      0        0
00:07:25.191                 tests      1      1      1      0        0
00:07:25.191               asserts     31     31     31      0      n/a
00:07:25.191  
00:07:25.191  Elapsed time =    0.001 seconds
00:07:25.191  
00:07:25.191  real	0m0.030s
00:07:25.191  user	0m0.016s
00:07:25.191  sys	0m0.014s
00:07:25.191   05:52:46 unittest.unittest_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.191   05:52:46 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:25.191  ************************************
00:07:25.191  END TEST unittest_rdma
00:07:25.191  ************************************
00:07:25.191   05:52:46 unittest -- unit/unittest.sh@237 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:25.191   05:52:46 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.191   05:52:46 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.191   05:52:46 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:25.191  ************************************
00:07:25.191  START TEST unittest_nvmf_rdma
00:07:25.191  ************************************
00:07:25.191   05:52:46 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:25.191  
00:07:25.191  
00:07:25.191       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.191       http://cunit.sourceforge.net/
00:07:25.191  
00:07:25.191  
00:07:25.191  Suite: nvmf
00:07:25.191    Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-18 05:52:46.075309] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:07:25.191  [2024-11-18 05:52:46.075586] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:07:25.191  [2024-11-18 05:52:46.075629] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:07:25.191  passed
00:07:25.191    Test: test_spdk_nvmf_rdma_request_process ...passed
00:07:25.191    Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:07:25.191    Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:07:25.191    Test: test_nvmf_rdma_opts_init ...passed
00:07:25.191    Test: test_nvmf_rdma_request_free_data ...passed
00:07:25.191    Test: test_nvmf_rdma_resources_create ...passed
00:07:25.191    Test: test_nvmf_rdma_qpair_compare ...passed
00:07:25.191    Test: test_nvmf_rdma_resize_cq ...[2024-11-18 05:52:46.078521] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:07:25.191  Using CQ of insufficient size may lead to CQ overrun
00:07:25.191  [2024-11-18 05:52:46.078596] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:07:25.191  [2024-11-18 05:52:46.078655] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 968:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:25.191  passed
00:07:25.192  
00:07:25.192  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.192                suites      1      1    n/a      0        0
00:07:25.192                 tests      9      9      9      0        0
00:07:25.192               asserts    579    579    579      0      n/a
00:07:25.192  
00:07:25.192  Elapsed time =    0.004 seconds
00:07:25.192  
00:07:25.192  real	0m0.043s
00:07:25.192  user	0m0.027s
00:07:25.192  sys	0m0.017s
00:07:25.192   05:52:46 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.192   05:52:46 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:25.192  ************************************
00:07:25.192  END TEST unittest_nvmf_rdma
00:07:25.192  ************************************
00:07:25.192   05:52:46 unittest -- unit/unittest.sh@240 -- # [[ y == y ]]
00:07:25.192   05:52:46 unittest -- unit/unittest.sh@241 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:25.192   05:52:46 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.192   05:52:46 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.192   05:52:46 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:25.192  ************************************
00:07:25.192  START TEST unittest_nvme_cuse
00:07:25.192  ************************************
00:07:25.192   05:52:46 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:25.451  
00:07:25.451  
00:07:25.451       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.451       http://cunit.sourceforge.net/
00:07:25.451  
00:07:25.451  
00:07:25.451  Suite: nvme_cuse
00:07:25.451    Test: test_cuse_nvme_submit_io_read_write ...passed
00:07:25.451    Test: test_cuse_nvme_submit_io_read_write_with_md ...passed
00:07:25.451    Test: test_cuse_nvme_submit_passthru_cmd ...passed
00:07:25.451    Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed
00:07:25.451    Test: test_nvme_cuse_get_cuse_ns_device ...passed
00:07:25.451    Test: test_cuse_nvme_submit_io ...[2024-11-18 05:52:46.171021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid
00:07:25.451  passed
00:07:25.451    Test: test_cuse_nvme_reset ...[2024-11-18 05:52:46.171289] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported
00:07:25.451  passed
00:07:25.715    Test: test_nvme_cuse_stop ...passed
00:07:25.715    Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed
00:07:25.715  
00:07:25.715  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.715                suites      1      1    n/a      0        0
00:07:25.715                 tests      9      9      9      0        0
00:07:25.715               asserts    118    118    118      0      n/a
00:07:25.715  
00:07:25.715  Elapsed time =    0.504 seconds
00:07:25.715  
00:07:25.715  real	0m0.536s
00:07:25.715  user	0m0.274s
00:07:25.715  sys	0m0.263s
00:07:25.715   05:52:46 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.715   05:52:46 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x
00:07:25.715  ************************************
00:07:25.715  END TEST unittest_nvme_cuse
00:07:25.715  ************************************
00:07:25.995   05:52:46 unittest -- unit/unittest.sh@244 -- # run_test unittest_nvmf unittest_nvmf
00:07:25.995   05:52:46 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.995   05:52:46 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.995   05:52:46 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:25.995  ************************************
00:07:25.995  START TEST unittest_nvmf
00:07:25.995  ************************************
00:07:25.995   05:52:46 unittest.unittest_nvmf -- common/autotest_common.sh@1129 -- # unittest_nvmf
00:07:25.995   05:52:46 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut
00:07:25.995  
00:07:25.995  
00:07:25.995       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.995       http://cunit.sourceforge.net/
00:07:25.995  
00:07:25.995  
00:07:25.995  Suite: nvmf
00:07:25.995    Test: test_get_log_page ...[2024-11-18 05:52:46.756941] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2
00:07:25.995  passed
00:07:25.995    Test: test_process_fabrics_cmd ...[2024-11-18 05:52:46.757141] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT
00:07:25.995  passed
00:07:25.995    Test: test_connect ...[2024-11-18 05:52:46.757609] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1013:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small
00:07:25.995  [2024-11-18 05:52:46.757659] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 876:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234
00:07:25.995  [2024-11-18 05:52:46.757689] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1052:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated
00:07:25.995  [2024-11-18 05:52:46.757717] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1'
00:07:25.995  [2024-11-18 05:52:46.757745] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 887:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0
00:07:25.996  [2024-11-18 05:52:46.757856] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31)
00:07:25.996  [2024-11-18 05:52:46.757888] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63)
00:07:25.996  [2024-11-18 05:52:46.757933] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 927:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234).
00:07:25.996  [2024-11-18 05:52:46.758007] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff
00:07:25.996  [2024-11-18 05:52:46.758072] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 677:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller
00:07:25.996  [2024-11-18 05:52:46.758381] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 683:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled
00:07:25.996  [2024-11-18 05:52:46.758457] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3
00:07:25.996  [2024-11-18 05:52:46.758515] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3
00:07:25.996  [2024-11-18 05:52:46.758597] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2
00:07:25.996  [2024-11-18 05:52:46.758668] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0)
00:07:25.996  [2024-11-18 05:52:46.758977] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 807:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil))
00:07:25.996  [2024-11-18 05:52:46.759208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 807:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil))
00:07:25.996  passed
00:07:25.996    Test: test_get_ns_id_desc_list ...passed
00:07:25.996    Test: test_identify_ns ...[2024-11-18 05:52:46.759752] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:25.996  [2024-11-18 05:52:46.760013] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4
00:07:25.996  [2024-11-18 05:52:46.760107] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:07:25.996  passed
00:07:25.996    Test: test_identify_ns_iocs_specific ...[2024-11-18 05:52:46.760235] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:25.996  passed
00:07:25.996    Test: test_reservation_write_exclusive ...passed
00:07:25.996    Test: test_reservation_exclusive_access ...passed
00:07:25.996    Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed
00:07:25.996    Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed
00:07:25.996    Test: test_reservation_notification_log_page ...passed
00:07:25.996    Test: test_get_dif_ctx ...passed
00:07:25.996    Test: test_set_get_features ...[2024-11-18 05:52:46.760502] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:25.996  passed
00:07:25.996    Test: test_identify_ctrlr ...[2024-11-18 05:52:46.761022] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1649:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:25.996  [2024-11-18 05:52:46.761062] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1649:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:25.996  [2024-11-18 05:52:46.761081] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1660:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3
00:07:25.996  [2024-11-18 05:52:46.761138] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1736:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit
00:07:25.996  passed
00:07:25.996    Test: test_identify_ctrlr_iocs_specific ...passed
00:07:25.996    Test: test_custom_admin_cmd ...passed
00:07:25.996    Test: test_fused_compare_and_write ...[2024-11-18 05:52:46.761520] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4368:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations
00:07:25.996  [2024-11-18 05:52:46.761558] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4357:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:25.996  [2024-11-18 05:52:46.761587] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4375:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:25.996  passed
00:07:25.996    Test: test_multi_async_event_reqs ...passed
00:07:25.996    Test: test_get_ana_log_page_one_ns_per_anagrp ...passed
00:07:25.996    Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed
00:07:25.996    Test: test_multi_async_events ...passed
00:07:25.996    Test: test_rae ...passed
00:07:25.996    Test: test_nvmf_ctrlr_create_destruct ...passed
00:07:25.996    Test: test_nvmf_ctrlr_use_zcopy ...passed
00:07:25.996    Test: test_spdk_nvmf_request_zcopy_start ...[2024-11-18 05:52:46.762213] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT
00:07:25.996  [2024-11-18 05:52:46.762261] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4
00:07:25.996  passed
00:07:25.996    Test: test_zcopy_read ...passed
00:07:25.996    Test: test_zcopy_write ...passed
00:07:25.996    Test: test_nvmf_property_set ...passed
00:07:25.996    Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-11-18 05:52:46.762449] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1947:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:25.996  [2024-11-18 05:52:46.762504] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1947:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:25.996  passed
00:07:25.996    Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-11-18 05:52:46.762543] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1971:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0
00:07:25.996  [2024-11-18 05:52:46.762562] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1977:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0
00:07:25.996  [2024-11-18 05:52:46.762617] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1989:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:07:25.996  [2024-11-18 05:52:46.762645] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1989:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:07:25.996  passed
00:07:25.996    Test: test_nvmf_ctrlr_ns_attachment ...passed
00:07:25.996    Test: test_nvmf_check_qpair_active ...[2024-11-18 05:52:46.762829] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT
00:07:25.996  [2024-11-18 05:52:46.762861] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4874:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication
00:07:25.996  [2024-11-18 05:52:46.762896] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0
00:07:25.996  [2024-11-18 05:52:46.762911] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4
00:07:25.996  [2024-11-18 05:52:46.762923] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5
00:07:25.996  passed
00:07:25.996  
00:07:25.996  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.996                suites      1      1    n/a      0        0
00:07:25.996                 tests     32     32     32      0        0
00:07:25.996               asserts    993    993    993      0      n/a
00:07:25.996  
00:07:25.996  Elapsed time =    0.006 seconds
00:07:25.996   05:52:46 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut
00:07:25.996  
00:07:25.996  
00:07:25.996       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.996       http://cunit.sourceforge.net/
00:07:25.996  
00:07:25.996  
00:07:25.996  Suite: nvmf
00:07:25.996    Test: test_get_rw_params ...passed
00:07:25.996    Test: test_get_rw_ext_params ...passed
00:07:25.996    Test: test_lba_in_range ...passed
00:07:25.996    Test: test_get_dif_ctx ...passed
00:07:25.996    Test: test_nvmf_bdev_ctrlr_identify_ns ...passed
00:07:25.996    Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed
00:07:25.996    Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed
00:07:25.996    Test: test_nvmf_bdev_ctrlr_cmd ...passed
00:07:25.996    Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed
00:07:25.996    Test: test_nvmf_bdev_ctrlr_nvme_passthru ...[2024-11-18 05:52:46.796197] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 499:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch
00:07:25.996  [2024-11-18 05:52:46.796451] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media
00:07:25.997  [2024-11-18 05:52:46.796508] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 514:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023
00:07:25.997  [2024-11-18 05:52:46.796573] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1018:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media
00:07:25.997  [2024-11-18 05:52:46.796614] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1025:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023
00:07:25.997  [2024-11-18 05:52:46.796681] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 453:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media
00:07:25.997  [2024-11-18 05:52:46.796723] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 460:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512
00:07:25.997  [2024-11-18 05:52:46.796811] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 552:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib
00:07:25.997  [2024-11-18 05:52:46.796855] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 559:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media
00:07:25.997  passed
00:07:25.997  
00:07:25.997  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.997                suites      1      1    n/a      0        0
00:07:25.997                 tests     10     10     10      0        0
00:07:25.997               asserts    159    159    159      0      n/a
00:07:25.997  
00:07:25.997  Elapsed time =    0.001 seconds
00:07:25.997   05:52:46 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut
00:07:25.997  
00:07:25.997  
00:07:25.997       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.997       http://cunit.sourceforge.net/
00:07:25.997  
00:07:25.997  
00:07:25.997  Suite: nvmf
00:07:25.997    Test: test_discovery_log ...passed
00:07:25.997    Test: test_discovery_log_with_filters ...passed
00:07:25.997  
00:07:25.997  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.997                suites      1      1    n/a      0        0
00:07:25.997                 tests      2      2      2      0        0
00:07:25.997               asserts    238    238    238      0      n/a
00:07:25.997  
00:07:25.997  Elapsed time =    0.003 seconds
00:07:25.997   05:52:46 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut
00:07:25.997  
00:07:25.997  
00:07:25.997       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.997       http://cunit.sourceforge.net/
00:07:25.997  
00:07:25.997  
00:07:25.997  Suite: nvmf
00:07:25.997    Test: nvmf_test_create_subsystem ...[2024-11-18 05:52:46.865586] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix.
00:07:25.997  [2024-11-18 05:52:46.866023] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid
00:07:25.997  [2024-11-18 05:52:46.866367] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long.
00:07:25.997  [2024-11-18 05:52:46.866574] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid
00:07:25.997  [2024-11-18 05:52:46.866784] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter.
00:07:25.997  [2024-11-18 05:52:46.866989] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid
00:07:25.997  [2024-11-18 05:52:46.867230] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter.
00:07:25.997  passed
00:07:25.997    Test: test_spdk_nvmf_subsystem_add_ns ...passed
00:07:25.997    Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-11-18 05:52:46.867435] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid
00:07:25.997  [2024-11-18 05:52:46.867494] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol.
00:07:25.997  [2024-11-18 05:52:46.867541] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid
00:07:25.997  [2024-11-18 05:52:46.867576] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter.
00:07:25.997  [2024-11-18 05:52:46.867605] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid
00:07:25.997  [2024-11-18 05:52:46.867728] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223
00:07:25.997  [2024-11-18 05:52:46.867781] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid
00:07:25.997  [2024-11-18 05:52:46.867907] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8.
00:07:25.997  [2024-11-18 05:52:46.867934] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid
00:07:25.997  [2024-11-18 05:52:46.868014] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length
00:07:25.997  [2024-11-18 05:52:46.868045] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid
00:07:25.997  [2024-11-18 05:52:46.868084] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:25.997  [2024-11-18 05:52:46.868113] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid
00:07:25.997  [2024-11-18 05:52:46.868144] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:25.997  [2024-11-18 05:52:46.868173] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid
00:07:25.997  [2024-11-18 05:52:46.868527] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use
00:07:25.997  [2024-11-18 05:52:46.868573] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2096:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295
00:07:25.997  [2024-11-18 05:52:46.868804] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2230:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace.
00:07:25.997  passed
00:07:25.997    Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:07:25.997    Test: test_spdk_nvmf_ns_visible ...[2024-11-18 05:52:46.869025] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11
00:07:25.997  passed
00:07:25.997    Test: test_reservation_register ...[2024-11-18 05:52:46.869565] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.997  [2024-11-18 05:52:46.869694] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3277:nvmf_ns_reservation_register: *ERROR*: No registrant
00:07:25.997  passed
00:07:25.997    Test: test_reservation_register_with_ptpl ...passed
00:07:25.997    Test: test_reservation_acquire_preempt_1 ...passed
00:07:25.997    Test: test_reservation_acquire_release_with_ptpl ...[2024-11-18 05:52:46.871524] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.997  passed
00:07:25.997    Test: test_reservation_release ...passed
00:07:25.997    Test: test_reservation_unregister_notification ...[2024-11-18 05:52:46.873970] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.997  [2024-11-18 05:52:46.874181] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.997  passed
00:07:25.997    Test: test_reservation_release_notification ...[2024-11-18 05:52:46.874415] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.997  passed
00:07:25.998    Test: test_reservation_release_notification_write_exclusive ...passed
00:07:25.998    Test: test_reservation_clear_notification ...passed
00:07:25.998  [2024-11-18 05:52:46.874580] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.998  [2024-11-18 05:52:46.874819] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.998  
00:07:25.998    Test: test_reservation_preempt_notification ...passed
00:07:25.998    Test: test_spdk_nvmf_ns_event ...[2024-11-18 05:52:46.875063] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:25.998  passed
00:07:25.998    Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:07:25.998    Test: test_nvmf_subsystem_add_ctrlr ...passed
00:07:25.998    Test: test_spdk_nvmf_subsystem_add_host ...passed
00:07:25.998    Test: test_nvmf_ns_reservation_report ...passed
00:07:25.998    Test: test_nvmf_nqn_is_valid ...passed
00:07:25.998    Test: test_nvmf_ns_reservation_restore ...passed
00:07:25.998    Test: test_nvmf_subsystem_state_change ...passed
00:07:25.998    Test: test_nvmf_reservation_custom_ops ...[2024-11-18 05:52:46.875864] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
00:07:25.998  [2024-11-18 05:52:46.875946] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport
00:07:25.998  [2024-11-18 05:52:46.876071] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3582:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
00:07:25.998  [2024-11-18 05:52:46.876131] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
00:07:25.998  [2024-11-18 05:52:46.876164] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:855dad65-9700-4634-9531-a35dffe4b94": uuid is not the correct length
00:07:25.998  [2024-11-18 05:52:46.876182] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
00:07:25.998  [2024-11-18 05:52:46.876272] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2776:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
00:07:25.998  passed
00:07:25.998  
00:07:25.998  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:25.998                suites      1      1    n/a      0        0
00:07:25.998                 tests     24     24     24      0        0
00:07:25.998               asserts    499    499    499      0      n/a
00:07:25.998  
00:07:25.998  Elapsed time =    0.010 seconds
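
The NQN failures above exercise the documented validity rules: total length between 11 and 223 bytes, an "nqn." prefix, domain labels that start with a letter and end with an alphanumeric character, valid UTF-8, and correctly formatted uuid NQNs. A minimal sketch of just the length/prefix rules follows; the helper name is hypothetical and this is not the SPDK validator itself (lib/nvmf/subsystem.c:nvmf_nqn_is_valid):

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define NQN_MIN_LEN 11   /* "Invalid NQN ...: length 4 < min 11"    */
    #define NQN_MAX_LEN 223  /* "Invalid NQN ...: length 224 > max 223" */

    static bool nqn_basic_ok(const char *nqn)
    {
        size_t len = strlen(nqn);

        return len >= NQN_MIN_LEN && len <= NQN_MAX_LEN &&
               strncmp(nqn, "nqn.", 4) == 0;
    }

    int main(void)
    {
        printf("%d\n", nqn_basic_ok("nqn.2016-06.io.spdk:cnode1")); /* 1: valid          */
        printf("%d\n", nqn_basic_ok("nqn."));                       /* 0: length 4 < 11  */
        return 0;
    }
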
00:07:25.998   05:52:46 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:07:25.998  
00:07:25.998  
00:07:25.998       CUnit - A unit testing framework for C - Version 2.1-3
00:07:25.998       http://cunit.sourceforge.net/
00:07:25.998  
00:07:25.998  
00:07:25.998  Suite: nvmf
00:07:25.998    Test: test_nvmf_tcp_create ...passed
00:07:25.998    Test: test_nvmf_tcp_destroy ...[2024-11-18 05:52:46.941705] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 811:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:07:26.271  passed
00:07:26.271    Test: test_nvmf_tcp_poll_group_create ...passed
00:07:26.271    Test: test_nvmf_tcp_send_c2h_data ...passed
00:07:26.271    Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:07:26.271    Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:07:26.271    Test: test_nvmf_tcp_qpair_init_mem_resource ...[2024-11-18 05:52:47.026950] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d09cb0 is same with the state(5) to be set
00:07:26.271  passed
00:07:26.271    Test: test_nvmf_tcp_send_c2h_term_req ...passed
00:07:26.271    Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:07:26.271    Test: test_nvmf_tcp_icreq_handle ...passed
00:07:26.271    Test: test_nvmf_tcp_check_xfer_type ...passed
00:07:26.271    Test: test_nvmf_tcp_invalid_sgl ...[2024-11-18 05:52:47.058200] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.058293] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d0b030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.058344] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d0b030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.058386] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.058416] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d0b030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.058504] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2288:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:26.271  [2024-11-18 05:52:47.058557] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.058591] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d0d190 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.058637] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2288:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:26.271  [2024-11-18 05:52:47.058668] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d0d190 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.058703] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.058733] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d0d190 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.058834] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.058868] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d0d190 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.058960] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2697:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:07:26.271  passed
00:07:26.271    Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-18 05:52:47.058997] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059036] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810d116f0 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059085] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2415:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7cd810c0c8e0
00:07:26.271  [2024-11-18 05:52:47.059138] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059184] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059238] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2472:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7cd810c0c030
00:07:26.271  [2024-11-18 05:52:47.059264] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059298] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059330] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2425:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:07:26.271  passed
00:07:26.271    Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-11-18 05:52:47.059351] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059409] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059441] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2464:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:07:26.271  [2024-11-18 05:52:47.059460] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059498] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059536] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059599] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059639] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059683] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059715] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059754] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059801] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059853] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059896] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.059947] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  [2024-11-18 05:52:47.059987] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:26.271  [2024-11-18 05:52:47.060018] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cd810c0c030 is same with the state(6) to be set
00:07:26.271  passed
00:07:26.271    Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-18 05:52:47.090400] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 584:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:07:26.271  [2024-11-18 05:52:47.090658] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 595:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:07:26.271  passed
00:07:26.271    Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-18 05:52:47.092075] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 651:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:07:26.271  [2024-11-18 05:52:47.092346] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 656:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:07:26.271  passed
00:07:26.271  
00:07:26.271    Test: test_nvmf_tcp_tls_generate_tls_psk ...passed
00:07:26.272  
00:07:26.272  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.272                suites      1      1    n/a      0        0
00:07:26.272                 tests     17     17     17      0        0
00:07:26.272               asserts    215    215    215      0      n/a
00:07:26.272  
00:07:26.272  Elapsed time =    0.173 seconds
00:07:26.272  [2024-11-18 05:52:47.093426] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 725:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:07:26.272  [2024-11-18 05:52:47.093494] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 749:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:07:26.272   05:52:47 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:07:26.272  
00:07:26.272  
00:07:26.272       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.272       http://cunit.sourceforge.net/
00:07:26.272  
00:07:26.272  
00:07:26.272  Suite: nvmf
00:07:26.272    Test: test_nvmf_tgt_create_poll_group ...passed
00:07:26.272  
00:07:26.272  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.272                suites      1      1    n/a      0        0
00:07:26.272                 tests      1      1      1      0        0
00:07:26.272               asserts     17     17     17      0      n/a
00:07:26.272  
00:07:26.272  Elapsed time =    0.026 seconds
00:07:26.532  
00:07:26.532  real	0m0.520s
00:07:26.532  user	0m0.212s
00:07:26.532  sys	0m0.303s
00:07:26.532  ************************************
00:07:26.532  END TEST unittest_nvmf
00:07:26.532  ************************************
00:07:26.532   05:52:47 unittest.unittest_nvmf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.532   05:52:47 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x
00:07:26.532   05:52:47 unittest -- unit/unittest.sh@245 -- # [[ n == y ]]
00:07:26.532   05:52:47 unittest -- unit/unittest.sh@250 -- # [[ n == y ]]
00:07:26.532   05:52:47 unittest -- unit/unittest.sh@254 -- # run_test unittest_scsi unittest_scsi
00:07:26.532   05:52:47 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.532   05:52:47 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.532   05:52:47 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:26.532  ************************************
00:07:26.532  START TEST unittest_scsi
00:07:26.532  ************************************
00:07:26.532   05:52:47 unittest.unittest_scsi -- common/autotest_common.sh@1129 -- # unittest_scsi
00:07:26.532   05:52:47 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:07:26.532  
00:07:26.532  
00:07:26.532       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.532       http://cunit.sourceforge.net/
00:07:26.532  
00:07:26.532  
00:07:26.532  Suite: dev_suite
00:07:26.532    Test: dev_destruct_null_dev ...passed
00:07:26.532    Test: dev_destruct_zero_luns ...passed
00:07:26.532    Test: dev_destruct_null_lun ...passed
00:07:26.532    Test: dev_destruct_success ...passed
00:07:26.532    Test: dev_construct_num_luns_zero ...passed
00:07:26.532    Test: dev_construct_no_lun_zero ...passed
00:07:26.532    Test: dev_construct_null_lun ...[2024-11-18 05:52:47.330584] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:07:26.532  [2024-11-18 05:52:47.330842] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:07:26.532  [2024-11-18 05:52:47.330883] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:07:26.532  passed
00:07:26.532    Test: dev_construct_name_too_long ...passed
00:07:26.532    Test: dev_construct_success ...passed
00:07:26.532    Test: dev_construct_success_lun_zero_not_first ...passed
00:07:26.532    Test: dev_queue_mgmt_task_success ...[2024-11-18 05:52:47.330940] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:07:26.532  passed
00:07:26.532    Test: dev_queue_task_success ...passed
00:07:26.532    Test: dev_stop_success ...passed
00:07:26.532    Test: dev_add_port_max_ports ...passed
00:07:26.532    Test: dev_add_port_construct_failure1 ...passed
00:07:26.532    Test: dev_add_port_construct_failure2 ...passed
00:07:26.532    Test: dev_add_port_success1 ...passed
00:07:26.532    Test: dev_add_port_success2 ...passed
00:07:26.532    Test: dev_add_port_success3 ...passed
00:07:26.532    Test: dev_find_port_by_id_num_ports_zero ...passed
00:07:26.532    Test: dev_find_port_by_id_id_not_found_failure ...passed
00:07:26.532    Test: dev_find_port_by_id_success ...passed
00:07:26.532    Test: dev_add_lun_bdev_not_found ...passed
00:07:26.532    Test: dev_add_lun_no_free_lun_id ...passed
00:07:26.532    Test: dev_add_lun_success1 ...passed
00:07:26.532    Test: dev_add_lun_success2 ...passed
00:07:26.532    Test: dev_check_pending_tasks ...passed
00:07:26.532    Test: dev_iterate_luns ...passed
00:07:26.532    Test: dev_find_free_lun ...passed
00:07:26.532  
00:07:26.532  [2024-11-18 05:52:47.331270] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:07:26.532  [2024-11-18 05:52:47.331320] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c:  49:scsi_port_construct: *ERROR*: port name too long
00:07:26.532  [2024-11-18 05:52:47.331362] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:07:26.532  [2024-11-18 05:52:47.331745] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:07:26.532  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.532                suites      1      1    n/a      0        0
00:07:26.532                 tests     29     29     29      0        0
00:07:26.532               asserts     97     97     97      0      n/a
00:07:26.532  
00:07:26.532  Elapsed time =    0.002 seconds
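
dev_suite checks the construction invariants printed above: a device name of at most 255 characters, at least one LUN with LUN 0 present, and at most 4 ports with unique IDs. A hypothetical pre-flight check capturing the first two rules; the real checks live in lib/scsi/dev.c:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    #define SCSI_DEV_MAX_NAME 255  /* "name longer than maximum allowed length 255" */

    static bool scsi_dev_args_ok(const char *name, const int *lun_ids, size_t num_luns)
    {
        size_t i;

        if (strlen(name) > SCSI_DEV_MAX_NAME || num_luns == 0) {
            return false;              /* "no LUNs specified" */
        }
        for (i = 0; i < num_luns; i++) {
            if (lun_ids[i] == 0) {
                return true;           /* LUN 0 present */
            }
        }
        return false;                  /* "no LUN 0 specified" */
    }

    int main(void)
    {
        int luns[] = { 0, 1 };
        return scsi_dev_args_ok("Malloc0", luns, 2) ? 0 : 1;
    }
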
00:07:26.532   05:52:47 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:07:26.532  
00:07:26.532  
00:07:26.532       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.532       http://cunit.sourceforge.net/
00:07:26.532  
00:07:26.532  
00:07:26.532  Suite: lun_suite
00:07:26.532    Test: lun_task_mgmt_execute_abort_task_not_supported ...passed
00:07:26.532    Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed
00:07:26.532    Test: lun_task_mgmt_execute_lun_reset ...passed
00:07:26.532    Test: lun_task_mgmt_execute_target_reset ...passed
00:07:26.532    Test: lun_task_mgmt_execute_invalid_case ...[2024-11-18 05:52:47.361504] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:07:26.532  [2024-11-18 05:52:47.361749] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:07:26.532  passed
00:07:26.532    Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:07:26.532    Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:07:26.532    Test: lun_append_task_null_lun_not_supported ...passed
00:07:26.532    Test: lun_execute_scsi_task_pending ...passed
00:07:26.532    Test: lun_execute_scsi_task_complete ...passed
00:07:26.532    Test: lun_execute_scsi_task_resize ...passed
00:07:26.532    Test: lun_destruct_success ...passed
00:07:26.532    Test: lun_construct_null_ctx ...[2024-11-18 05:52:47.361934] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:07:26.532  [2024-11-18 05:52:47.362181] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:07:26.532  passed
00:07:26.532    Test: lun_construct_success ...passed
00:07:26.532    Test: lun_reset_task_wait_scsi_task_complete ...passed
00:07:26.533    Test: lun_reset_task_suspend_scsi_task ...passed
00:07:26.533    Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:07:26.533    Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:07:26.533  
00:07:26.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.533                suites      1      1    n/a      0        0
00:07:26.533                 tests     18     18     18      0        0
00:07:26.533               asserts    153    153    153      0      n/a
00:07:26.533  
00:07:26.533  Elapsed time =    0.001 seconds
00:07:26.533   05:52:47 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:07:26.533  
00:07:26.533  
00:07:26.533       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.533       http://cunit.sourceforge.net/
00:07:26.533  
00:07:26.533  
00:07:26.533  Suite: scsi_suite
00:07:26.533    Test: scsi_init ...passed
00:07:26.533  
00:07:26.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.533                suites      1      1    n/a      0        0
00:07:26.533                 tests      1      1      1      0        0
00:07:26.533               asserts      1      1      1      0      n/a
00:07:26.533  
00:07:26.533  Elapsed time =    0.000 seconds
00:07:26.533   05:52:47 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:07:26.533  
00:07:26.533  
00:07:26.533       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.533       http://cunit.sourceforge.net/
00:07:26.533  
00:07:26.533  
00:07:26.533  Suite: translation_suite
00:07:26.533    Test: mode_select_6_test ...passed
00:07:26.533    Test: mode_select_6_test2 ...passed
00:07:26.533    Test: mode_sense_6_test ...passed
00:07:26.533    Test: mode_sense_10_test ...passed
00:07:26.533    Test: inquiry_evpd_test ...passed
00:07:26.533    Test: inquiry_standard_test ...passed
00:07:26.533    Test: inquiry_overflow_test ...passed
00:07:26.533    Test: task_complete_test ...passed
00:07:26.533    Test: lba_range_test ...passed
00:07:26.533    Test: xfer_len_test ...[2024-11-18 05:52:47.424936] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:07:26.533  passed
00:07:26.533    Test: xfer_test ...passed
00:07:26.533    Test: scsi_name_padding_test ...passed
00:07:26.533    Test: get_dif_ctx_test ...passed
00:07:26.533    Test: unmap_split_test ...passed
00:07:26.533  
00:07:26.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.533                suites      1      1    n/a      0        0
00:07:26.533                 tests     14     14     14      0        0
00:07:26.533               asserts   1205   1205   1205      0      n/a
00:07:26.533  
00:07:26.533  Elapsed time =    0.005 seconds
00:07:26.533   05:52:47 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:07:26.533  
00:07:26.533  
00:07:26.533       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.533       http://cunit.sourceforge.net/
00:07:26.533  
00:07:26.533  
00:07:26.533  Suite: reservation_suite
00:07:26.533    Test: test_reservation_register ...passed
00:07:26.533    Test: test_reservation_reserve ...passed
00:07:26.533    Test: test_all_registrant_reservation_reserve ...passed
00:07:26.533    Test: test_all_registrant_reservation_access ...passed
00:07:26.533    Test: test_reservation_preempt_non_all_regs ...passed
00:07:26.533    Test: test_reservation_preempt_all_regs ...[2024-11-18 05:52:47.455948] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  [2024-11-18 05:52:47.456209] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  [2024-11-18 05:52:47.456278] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:07:26.533  [2024-11-18 05:52:47.456354] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:07:26.533  [2024-11-18 05:52:47.456417] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  [2024-11-18 05:52:47.456518] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  [2024-11-18 05:52:47.456586] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type  reject command 0x8
00:07:26.533  [2024-11-18 05:52:47.456625] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type  reject command 0xaa
00:07:26.533  [2024-11-18 05:52:47.456715] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  [2024-11-18 05:52:47.456793] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:07:26.533  [2024-11-18 05:52:47.456908] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  passed
00:07:26.533    Test: test_reservation_cmds_conflict ...passed
00:07:26.533    Test: test_scsi2_reserve_release ...passed
00:07:26.533    Test: test_pr_with_scsi2_reserve_release ...[2024-11-18 05:52:47.456986] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  [2024-11-18 05:52:47.457047] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type  reject command 0x2a
00:07:26.533  [2024-11-18 05:52:47.457096] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:26.533  [2024-11-18 05:52:47.457157] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:26.533  [2024-11-18 05:52:47.457211] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:26.533  [2024-11-18 05:52:47.457239] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:26.533  passed
00:07:26.533  
00:07:26.533  [2024-11-18 05:52:47.457304] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:26.533  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.533                suites      1      1    n/a      0        0
00:07:26.533                 tests      9      9      9      0        0
00:07:26.533               asserts    344    344    344      0      n/a
00:07:26.533  
00:07:26.533  Elapsed time =    0.002 seconds
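
reservation_suite feeds mismatched keys through PERSISTENT RESERVE OUT; the recurring "Reservation key 0xa1 don't match registrant's key 0xa" lines are the register path refusing a key that disagrees with the existing registrant. A hypothetical stand-in for that check (the real one is lib/scsi/scsi_pr.c:scsi_pr_out_register):

    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdio.h>

    static bool pr_register_key_ok(uint64_t res_key, uint64_t registrant_key)
    {
        if (res_key != registrant_key) {
            fprintf(stderr,
                    "Reservation key 0x%" PRIx64 " doesn't match registrant's key 0x%" PRIx64 "\n",
                    res_key, registrant_key);
            return false;  /* command completes with RESERVATION CONFLICT */
        }
        return true;
    }

    int main(void)
    {
        return pr_register_key_ok(0xa1, 0xa) ? 0 : 1;  /* mirrors the log above */
    }
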
00:07:26.533  
00:07:26.533  real	0m0.156s
00:07:26.533  user	0m0.075s
00:07:26.533  sys	0m0.083s
00:07:26.533  ************************************
00:07:26.533  END TEST unittest_scsi
00:07:26.533  ************************************
00:07:26.533   05:52:47 unittest.unittest_scsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.533   05:52:47 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x
00:07:26.793    05:52:47 unittest -- unit/unittest.sh@255 -- # uname -s
00:07:26.793   05:52:47 unittest -- unit/unittest.sh@255 -- # '[' Linux = Linux ']'
00:07:26.793   05:52:47 unittest -- unit/unittest.sh@258 -- # run_test unittest_sock unittest_sock
00:07:26.793   05:52:47 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.793   05:52:47 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.793   05:52:47 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:26.793  ************************************
00:07:26.793  START TEST unittest_sock
00:07:26.793  ************************************
00:07:26.793   05:52:47 unittest.unittest_sock -- common/autotest_common.sh@1129 -- # unittest_sock
00:07:26.793   05:52:47 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:07:26.793  
00:07:26.793  
00:07:26.793       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.793       http://cunit.sourceforge.net/
00:07:26.793  
00:07:26.793  
00:07:26.793  Suite: sock
00:07:26.793    Test: posix_sock ...passed
00:07:26.793    Test: ut_sock ...passed
00:07:26.793    Test: posix_sock_group ...passed
00:07:26.793    Test: ut_sock_group ...passed
00:07:26.793    Test: posix_sock_group_fairness ...passed
00:07:26.793    Test: _posix_sock_close ...passed
00:07:26.793    Test: sock_get_default_opts ...passed
00:07:26.793    Test: ut_sock_impl_get_set_opts ...passed
00:07:26.793    Test: posix_sock_impl_get_set_opts ...passed
00:07:26.793    Test: ut_sock_map ...passed
00:07:26.793    Test: override_impl_opts ...passed
00:07:26.793    Test: ut_sock_group_get_ctx ...passed
00:07:26.793    Test: posix_get_interface_name ...passed
00:07:26.793  
00:07:26.793  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.793                suites      1      1    n/a      0        0
00:07:26.793                 tests     13     13     13      0        0
00:07:26.793               asserts    360    360    360      0      n/a
00:07:26.793  
00:07:26.793  Elapsed time =    0.011 seconds
00:07:26.793   05:52:47 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:07:26.793  
00:07:26.793  
00:07:26.793       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.793       http://cunit.sourceforge.net/
00:07:26.793  
00:07:26.793  
00:07:26.793  Suite: posix
00:07:26.793    Test: flush ...passed
00:07:26.793  
00:07:26.793  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.793                suites      1      1    n/a      0        0
00:07:26.793                 tests      1      1      1      0        0
00:07:26.793               asserts     28     28     28      0      n/a
00:07:26.793  
00:07:26.793  Elapsed time =    0.000 seconds
00:07:26.793  ************************************
00:07:26.793  END TEST unittest_sock
00:07:26.793  ************************************
00:07:26.794   05:52:47 unittest.unittest_sock -- unit/unittest.sh@128 -- # [[ n == y ]]
00:07:26.794  
00:07:26.794  real	0m0.109s
00:07:26.794  user	0m0.038s
00:07:26.794  sys	0m0.048s
00:07:26.794   05:52:47 unittest.unittest_sock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.794   05:52:47 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x
00:07:26.794   05:52:47 unittest -- unit/unittest.sh@260 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:26.794   05:52:47 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.794   05:52:47 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.794   05:52:47 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:26.794  ************************************
00:07:26.794  START TEST unittest_thread
00:07:26.794  ************************************
00:07:26.794   05:52:47 unittest.unittest_thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:26.794  
00:07:26.794  
00:07:26.794       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.794       http://cunit.sourceforge.net/
00:07:26.794  
00:07:26.794  
00:07:26.794  Suite: io_channel
00:07:26.794    Test: thread_alloc ...passed
00:07:26.794    Test: thread_send_msg ...passed
00:07:26.794    Test: thread_poller ...passed
00:07:26.794    Test: poller_pause ...passed
00:07:26.794    Test: thread_for_each ...passed
00:07:26.794    Test: for_each_channel_remove ...passed
00:07:26.794    Test: for_each_channel_unreg ...[2024-11-18 05:52:47.734183] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2193:spdk_io_device_register: *ERROR*: io_device 0x79c660909640 already registered (old:0x513000000200 new:0x5130000003c0)
00:07:26.794  passed
00:07:26.794    Test: thread_name ...passed
00:07:26.794    Test: channel ...[2024-11-18 05:52:47.738687] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2327:spdk_get_io_channel: *ERROR*: could not find io_device 0x6254f61a03c0
00:07:26.794  passed
00:07:26.794    Test: channel_destroy_races ...passed
00:07:26.794    Test: thread_exit_test ...[2024-11-18 05:52:47.744460] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 654:thread_exit: *ERROR*: thread 0x519000007380 got timeout, and move it to the exited state forcefully
00:07:26.794  passed
00:07:26.794    Test: thread_update_stats_test ...passed
00:07:26.794    Test: nested_channel ...passed
00:07:26.794    Test: device_unregister_and_thread_exit_race ...passed
00:07:26.794    Test: cache_closest_timed_poller ...passed
00:07:26.794    Test: multi_timed_pollers_have_same_expiration ...passed
00:07:26.794    Test: io_device_lookup ...passed
00:07:26.794    Test: spdk_spin ...[2024-11-18 05:52:47.756911] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3111:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:26.794  [2024-11-18 05:52:47.756965] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x79c66090a020
00:07:26.794  [2024-11-18 05:52:47.757003] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3149:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:26.794  [2024-11-18 05:52:47.758980] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:07:26.794  [2024-11-18 05:52:47.759049] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x79c66090a020
00:07:26.794  [2024-11-18 05:52:47.759076] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3132:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:26.794  [2024-11-18 05:52:47.759115] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x79c66090a020
00:07:26.794  [2024-11-18 05:52:47.759153] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3132:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:26.794  [2024-11-18 05:52:47.759190] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x79c66090a020
00:07:26.794  [2024-11-18 05:52:47.759214] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3093:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:07:26.794  [2024-11-18 05:52:47.759242] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x79c66090a020
00:07:26.794  passed
00:07:26.794    Test: for_each_channel_and_thread_exit_race ...passed
00:07:26.794    Test: for_each_thread_and_thread_exit_race ...passed
00:07:26.794    Test: poller_get_name ...passed
00:07:27.053    Test: poller_get_id ...passed
00:07:27.053    Test: poller_get_state_str ...passed
00:07:27.053    Test: poller_get_period_ticks ...passed
00:07:27.053    Test: poller_get_stats ...passed
00:07:27.053  
00:07:27.053  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.053                suites      1      1    n/a      0        0
00:07:27.053                 tests     25     25     25      0        0
00:07:27.053               asserts    429    429    429      0      n/a
00:07:27.053  
00:07:27.053  Elapsed time =    0.065 seconds
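
The spdk_spin test walks through every error class printed above: locking outside an SPDK thread (error 1), recursive lock (error 2, deadlock), unlock from the wrong thread (error 3), and destroying a held lock (error 5). A sketch of the discipline it enforces, assuming the spdk/thread.h API named in those messages:

    #include "spdk/thread.h"

    static struct spdk_spinlock g_lock;

    /* Must run on an SPDK thread: spdk_spin_lock() aborts otherwise. */
    static void critical_section(void)
    {
        spdk_spin_lock(&g_lock);       /* never re-lock: deadlock detected   */
        /* ... touch state shared across SPDK threads ... */
        spdk_spin_unlock(&g_lock);     /* same thread that took the lock     */
    }

    static void init_lock(void)    { spdk_spin_init(&g_lock); }
    static void destroy_lock(void) { spdk_spin_destroy(&g_lock); /* never while held */ }
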
00:07:27.053  
00:07:27.053  real	0m0.107s
00:07:27.053  user	0m0.078s
00:07:27.053  sys	0m0.029s
00:07:27.053   05:52:47 unittest.unittest_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.053   05:52:47 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x
00:07:27.053  ************************************
00:07:27.053  END TEST unittest_thread
00:07:27.053  ************************************
00:07:27.053   05:52:47 unittest -- unit/unittest.sh@261 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:27.053   05:52:47 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.053   05:52:47 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.053   05:52:47 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.053  ************************************
00:07:27.053  START TEST unittest_iobuf
00:07:27.053  ************************************
00:07:27.053   05:52:47 unittest.unittest_iobuf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:27.053  
00:07:27.053  
00:07:27.053       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.053       http://cunit.sourceforge.net/
00:07:27.053  
00:07:27.053  
00:07:27.053  Suite: io_channel
00:07:27.053    Test: iobuf ...passed
00:07:27.053    Test: iobuf_cache ...[2024-11-18 05:52:47.877952] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:27.053  [2024-11-18 05:52:47.878165] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:27.053  [2024-11-18 05:52:47.878290] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:07:27.053  [2024-11-18 05:52:47.878335] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:27.053  [2024-11-18 05:52:47.878418] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:27.053  [2024-11-18 05:52:47.878447] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:27.053  passed
00:07:27.054    Test: iobuf_priority ...passed
00:07:27.054  
00:07:27.054  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.054                suites      1      1    n/a      0        0
00:07:27.054                 tests      3      3      3      0        0
00:07:27.054               asserts    127    127    127      0      n/a
00:07:27.054  
00:07:27.054  Elapsed time =    0.009 seconds
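
iobuf_cache intentionally undersizes the shared pools; the "Failed to populate ... 4/5 entries" messages fire because a per-channel cache cannot be filled from a pool smaller than cache_size times the channel count. The arithmetic as plain C, with values taken from the log (the real sizing tool is scripts/calc-iobuf.py):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t small_pool_count = 4;  /* spdk_iobuf_opts.small_pool_count in the log */
        uint64_t cache_size       = 5;  /* entries requested per channel               */
        uint64_t channels         = 1;

        if (small_pool_count < cache_size * channels) {
            printf("increase small_pool_count to >= %" PRIu64 "\n",
                   cache_size * channels);
        }
        return 0;
    }
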
00:07:27.054  
00:07:27.054  real	0m0.045s
00:07:27.054  user	0m0.023s
00:07:27.054  sys	0m0.022s
00:07:27.054   05:52:47 unittest.unittest_iobuf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.054   05:52:47 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x
00:07:27.054  ************************************
00:07:27.054  END TEST unittest_iobuf
00:07:27.054  ************************************
00:07:27.054   05:52:47 unittest -- unit/unittest.sh@262 -- # run_test unittest_util unittest_util
00:07:27.054   05:52:47 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.054   05:52:47 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.054   05:52:47 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.054  ************************************
00:07:27.054  START TEST unittest_util
00:07:27.054  ************************************
00:07:27.054   05:52:47 unittest.unittest_util -- common/autotest_common.sh@1129 -- # unittest_util
00:07:27.054   05:52:47 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:07:27.054  
00:07:27.054  
00:07:27.054       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.054       http://cunit.sourceforge.net/
00:07:27.054  
00:07:27.054  
00:07:27.054  Suite: base64
00:07:27.054    Test: test_base64_get_encoded_strlen ...passed
00:07:27.054    Test: test_base64_get_decoded_len ...passed
00:07:27.054    Test: test_base64_encode ...passed
00:07:27.054    Test: test_base64_decode ...passed
00:07:27.054    Test: test_base64_urlsafe_encode ...passed
00:07:27.054    Test: test_base64_urlsafe_decode ...passed
00:07:27.054  
00:07:27.054  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.054                suites      1      1    n/a      0        0
00:07:27.054                 tests      6      6      6      0        0
00:07:27.054               asserts    112    112    112      0      n/a
00:07:27.054  
00:07:27.054  Elapsed time =    0.000 seconds
00:07:27.054   05:52:47 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:07:27.054  
00:07:27.054  
00:07:27.054       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.054       http://cunit.sourceforge.net/
00:07:27.054  
00:07:27.054  
00:07:27.054  Suite: bit_array
00:07:27.054    Test: test_1bit ...passed
00:07:27.054    Test: test_64bit ...passed
00:07:27.054    Test: test_find ...passed
00:07:27.054    Test: test_resize ...passed
00:07:27.054    Test: test_errors ...passed
00:07:27.054    Test: test_count ...passed
00:07:27.054    Test: test_mask_store_load ...passed
00:07:27.054    Test: test_mask_clear ...passed
00:07:27.054  
00:07:27.054  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.054                suites      1      1    n/a      0        0
00:07:27.054                 tests      8      8      8      0        0
00:07:27.054               asserts   5075   5075   5075      0      n/a
00:07:27.054  
00:07:27.054  Elapsed time =    0.002 seconds
00:07:27.054   05:52:48 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:07:27.313  
00:07:27.313  
00:07:27.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.313       http://cunit.sourceforge.net/
00:07:27.313  
00:07:27.313  
00:07:27.313  Suite: cpuset
00:07:27.313    Test: test_cpuset ...passed
00:07:27.313    Test: test_cpuset_parse ...[2024-11-18 05:52:48.033359] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '['
00:07:27.313  [2024-11-18 05:52:48.033574] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:07:27.313  [2024-11-18 05:52:48.033616] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:07:27.313  [2024-11-18 05:52:48.033672] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:07:27.313  [2024-11-18 05:52:48.033725] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:07:27.313  [2024-11-18 05:52:48.033794] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:07:27.313  [2024-11-18 05:52:48.033838] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:07:27.313  [2024-11-18 05:52:48.033880] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:07:27.313  passed
00:07:27.313    Test: test_cpuset_fmt ...passed
00:07:27.313    Test: test_cpuset_foreach ...passed
00:07:27.313  
00:07:27.313  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.313                suites      1      1    n/a      0        0
00:07:27.313                 tests      4      4      4      0        0
00:07:27.313               asserts     90     90     90      0      n/a
00:07:27.313  
00:07:27.313  Elapsed time =    0.002 seconds
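
test_cpuset_parse covers the core-list grammar rejected above: an unterminated '[', an empty list, a doubled dash, reversed ranges, stray commas, and out-of-range core numbers. A short usage sketch, assuming spdk/cpuset.h's spdk_cpuset_parse(), which accepts either a hex mask or a bracketed list:

    #include "spdk/cpuset.h"
    #include <stdio.h>

    int main(void)
    {
        struct spdk_cpuset set;

        printf("%d\n", spdk_cpuset_parse(&set, "[0-3,7]"));  /* 0: valid list      */
        printf("%d\n", spdk_cpuset_parse(&set, "0xff"));     /* 0: valid hex mask  */
        printf("%d\n", spdk_cpuset_parse(&set, "[10--11]")); /* <0: doubled dash   */
        printf("%d\n", spdk_cpuset_parse(&set, "["));        /* <0: unexpected end */
        return 0;
    }
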
00:07:27.313   05:52:48 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:07:27.313  
00:07:27.313  
00:07:27.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.313       http://cunit.sourceforge.net/
00:07:27.313  
00:07:27.313  
00:07:27.313  Suite: crc16
00:07:27.313    Test: test_crc16_t10dif ...passed
00:07:27.313    Test: test_crc16_t10dif_seed ...passed
00:07:27.313    Test: test_crc16_t10dif_copy ...passed
00:07:27.313  
00:07:27.313  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.313                suites      1      1    n/a      0        0
00:07:27.313                 tests      3      3      3      0        0
00:07:27.313               asserts      5      5      5      0      n/a
00:07:27.313  
00:07:27.313  Elapsed time =    0.000 seconds
00:07:27.313   05:52:48 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:07:27.313  
00:07:27.313  
00:07:27.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.313       http://cunit.sourceforge.net/
00:07:27.313  
00:07:27.313  
00:07:27.313  Suite: crc32_ieee
00:07:27.313    Test: test_crc32_ieee ...passed
00:07:27.313  
00:07:27.313  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.313                suites      1      1    n/a      0        0
00:07:27.313                 tests      1      1      1      0        0
00:07:27.313               asserts      1      1      1      0      n/a
00:07:27.313  
00:07:27.313  Elapsed time =    0.000 seconds
00:07:27.313   05:52:48 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:07:27.313  
00:07:27.313  
00:07:27.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.313       http://cunit.sourceforge.net/
00:07:27.313  
00:07:27.313  
00:07:27.313  Suite: crc32c
00:07:27.313    Test: test_crc32c ...passed
00:07:27.313    Test: test_crc32c_nvme ...passed
00:07:27.313  
00:07:27.313  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.313                suites      1      1    n/a      0        0
00:07:27.313                 tests      2      2      2      0        0
00:07:27.313               asserts     16     16     16      0      n/a
00:07:27.313  
00:07:27.313  Elapsed time =    0.000 seconds
00:07:27.313   05:52:48 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:07:27.313  
00:07:27.313  
00:07:27.313       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.313       http://cunit.sourceforge.net/
00:07:27.313  
00:07:27.313  
00:07:27.313  Suite: crc64
00:07:27.314    Test: test_crc64_nvme ...passed
00:07:27.314  
00:07:27.314  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.314                suites      1      1    n/a      0        0
00:07:27.314                 tests      1      1      1      0        0
00:07:27.314               asserts      4      4      4      0      n/a
00:07:27.314  
00:07:27.314  Elapsed time =    0.000 seconds
00:07:27.314   05:52:48 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:07:27.314  
00:07:27.314  
00:07:27.314       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.314       http://cunit.sourceforge.net/
00:07:27.314  
00:07:27.314  
00:07:27.314  Suite: string
00:07:27.314    Test: test_parse_ip_addr ...passed
00:07:27.314    Test: test_str_chomp ...passed
00:07:27.314    Test: test_parse_capacity ...passed
00:07:27.314    Test: test_sprintf_append_realloc ...passed
00:07:27.314    Test: test_strtol ...passed
00:07:27.314    Test: test_strtoll ...passed
00:07:27.314    Test: test_strarray ...passed
00:07:27.314    Test: test_strcpy_replace ...passed
00:07:27.314  
00:07:27.314  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.314                suites      1      1    n/a      0        0
00:07:27.314                 tests      8      8      8      0        0
00:07:27.314               asserts    161    161    161      0      n/a
00:07:27.314  
00:07:27.314  Elapsed time =    0.001 seconds
00:07:27.314   05:52:48 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:07:27.314  
00:07:27.314  
00:07:27.314       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.314       http://cunit.sourceforge.net/
00:07:27.314  
00:07:27.314  
00:07:27.314  Suite: dif
00:07:27.314    Test: dif_generate_and_verify_test ...[2024-11-18 05:52:48.215061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:27.314  [2024-11-18 05:52:48.215537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:27.314  [2024-11-18 05:52:48.215874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:27.314  [2024-11-18 05:52:48.216159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:27.314  [2024-11-18 05:52:48.216461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:27.314  [2024-11-18 05:52:48.216744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:27.314  passed
00:07:27.314    Test: dif_disable_check_test ...[2024-11-18 05:52:48.217855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:27.314  [2024-11-18 05:52:48.218160] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:27.314  [2024-11-18 05:52:48.218453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:27.314  passed
00:07:27.314    Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-18 05:52:48.219539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a80000, Actual=b9848de
00:07:27.314  [2024-11-18 05:52:48.219901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b98, Actual=b0a8
00:07:27.314  [2024-11-18 05:52:48.220224] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:07:27.314  [2024-11-18 05:52:48.220553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:07:27.314  [2024-11-18 05:52:48.220883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:27.314  [2024-11-18 05:52:48.221210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:27.314  [2024-11-18 05:52:48.221528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:27.314  [2024-11-18 05:52:48.221853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:27.314  [2024-11-18 05:52:48.222169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:27.314  [2024-11-18 05:52:48.222502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:27.314  [2024-11-18 05:52:48.222826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:27.314  passed
00:07:27.314    Test: dif_apptag_mask_test ...[2024-11-18 05:52:48.223162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:27.314  [2024-11-18 05:52:48.223471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:27.314  passed
00:07:27.314    Test: dif_sec_8_md_8_error_test ...passed
00:07:27.314    Test: dif_sec_512_md_0_error_test ...passed
00:07:27.314    Test: dif_sec_512_md_16_error_test ...passed
00:07:27.314  [2024-11-18 05:52:48.223729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 609:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:07:27.314  [2024-11-18 05:52:48.223795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.314  [2024-11-18 05:52:48.223846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:27.314  [2024-11-18 05:52:48.223879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:27.314  
00:07:27.314    Test: dif_sec_4096_md_0_8_error_test ...[2024-11-18 05:52:48.223912] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.314  passed
00:07:27.314    Test: dif_sec_4100_md_128_error_test ...passed
00:07:27.314    Test: dif_guard_seed_test ...passed
00:07:27.314    Test: dif_guard_value_test ...[2024-11-18 05:52:48.223936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.314  [2024-11-18 05:52:48.223969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.314  [2024-11-18 05:52:48.223998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.314  [2024-11-18 05:52:48.224032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:27.314  [2024-11-18 05:52:48.224061] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:27.314  passed
00:07:27.314    Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:27.314    Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:27.314    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 05:52:48.269671] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fd4d, Actual=fd4c
00:07:27.314  [2024-11-18 05:52:48.272196] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fe20, Actual=fe21
00:07:27.314  [2024-11-18 05:52:48.274727] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.314  [2024-11-18 05:52:48.277220] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.314  [2024-11-18 05:52:48.279697] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1005c
00:07:27.314  [2024-11-18 05:52:48.282190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1005c
00:07:27.314  [2024-11-18 05:52:48.284677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fd4c, Actual=f0a1
00:07:27.314  [2024-11-18 05:52:48.286460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fe21, Actual=416
00:07:27.314  [2024-11-18 05:52:48.288284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.575  [2024-11-18 05:52:48.290950] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=38564660, Actual=38574660
00:07:27.576  [2024-11-18 05:52:48.293566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.296177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.298700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100000000005c
00:07:27.576  [2024-11-18 05:52:48.301202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100000000005c
00:07:27.576  [2024-11-18 05:52:48.303677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=1ab753ed, Actual=e8459fec
00:07:27.576  [2024-11-18 05:52:48.305510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=38574660, Actual=788ec34d
00:07:27.576  [2024-11-18 05:52:48.307294] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.576  [2024-11-18 05:52:48.309949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=88000a2d4837a266, Actual=88010a2d4837a266
00:07:27.576  [2024-11-18 05:52:48.312539] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.315042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.317517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c
00:07:27.576  [2024-11-18 05:52:48.319984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c
00:07:27.576  [2024-11-18 05:52:48.322495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.576  [2024-11-18 05:52:48.324266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=88010a2d4837a266, Actual=a453e0d9fd8440c5
00:07:27.576  passed
00:07:27.576    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-18 05:52:48.325148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4d, Actual=fd4c
00:07:27.576  [2024-11-18 05:52:48.325468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe20, Actual=fe21
00:07:27.576  [2024-11-18 05:52:48.325749] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.326071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.326392] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.576  [2024-11-18 05:52:48.326685] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.576  [2024-11-18 05:52:48.326999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=f0a1
00:07:27.576  [2024-11-18 05:52:48.327232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=416
00:07:27.576  [2024-11-18 05:52:48.327473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.576  [2024-11-18 05:52:48.327819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38564660, Actual=38574660
00:07:27.576  [2024-11-18 05:52:48.328115] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.328447] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.328782] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.576  [2024-11-18 05:52:48.329069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.576  [2024-11-18 05:52:48.329369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=e8459fec
00:07:27.576  [2024-11-18 05:52:48.329575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=788ec34d
00:07:27.576  [2024-11-18 05:52:48.329829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.576  [2024-11-18 05:52:48.330118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88000a2d4837a266, Actual=88010a2d4837a266
00:07:27.576  [2024-11-18 05:52:48.330425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.330709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.331032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.576  [2024-11-18 05:52:48.331331] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.576  [2024-11-18 05:52:48.331623] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.576  [2024-11-18 05:52:48.331858] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=a453e0d9fd8440c5
00:07:27.576  passed
00:07:27.576    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-18 05:52:48.332167] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4d, Actual=fd4c
00:07:27.576  [2024-11-18 05:52:48.332499] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe20, Actual=fe21
00:07:27.576  [2024-11-18 05:52:48.332851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.333168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.333469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.576  [2024-11-18 05:52:48.333793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.576  [2024-11-18 05:52:48.334082] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=f0a1
00:07:27.576  [2024-11-18 05:52:48.334297] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=416
00:07:27.576  [2024-11-18 05:52:48.334533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.576  [2024-11-18 05:52:48.334852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38564660, Actual=38574660
00:07:27.576  [2024-11-18 05:52:48.335169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.335456] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.335794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.576  [2024-11-18 05:52:48.336101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.576  [2024-11-18 05:52:48.336422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=e8459fec
00:07:27.576  [2024-11-18 05:52:48.336650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=788ec34d
00:07:27.576  [2024-11-18 05:52:48.336900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.576  [2024-11-18 05:52:48.337204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88000a2d4837a266, Actual=88010a2d4837a266
00:07:27.576  [2024-11-18 05:52:48.337523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.337847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.338154] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.576  [2024-11-18 05:52:48.338449] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.576  [2024-11-18 05:52:48.338777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.576  passed
00:07:27.576    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-18 05:52:48.339010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=a453e0d9fd8440c5
00:07:27.576  [2024-11-18 05:52:48.339308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4d, Actual=fd4c
00:07:27.576  [2024-11-18 05:52:48.339618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe20, Actual=fe21
00:07:27.576  [2024-11-18 05:52:48.339931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.340239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.340569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.576  [2024-11-18 05:52:48.340874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.576  [2024-11-18 05:52:48.341239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=f0a1
00:07:27.576  [2024-11-18 05:52:48.341512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=416
00:07:27.576  [2024-11-18 05:52:48.341754] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.576  [2024-11-18 05:52:48.342050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38564660, Actual=38574660
00:07:27.576  [2024-11-18 05:52:48.342355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.342651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.576  [2024-11-18 05:52:48.342974] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.576  [2024-11-18 05:52:48.343276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.576  [2024-11-18 05:52:48.343570] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=e8459fec
00:07:27.576  [2024-11-18 05:52:48.343804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=788ec34d
00:07:27.576  [2024-11-18 05:52:48.344031] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.576  [2024-11-18 05:52:48.344349] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88000a2d4837a266, Actual=88010a2d4837a266
00:07:27.576  [2024-11-18 05:52:48.344654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.344961] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.345284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.577  [2024-11-18 05:52:48.345580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.577  [2024-11-18 05:52:48.345902] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.577  passed
00:07:27.577    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-18 05:52:48.346131] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=a453e0d9fd8440c5
00:07:27.577  [2024-11-18 05:52:48.346405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4d, Actual=fd4c
00:07:27.577  [2024-11-18 05:52:48.346714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe20, Actual=fe21
00:07:27.577  [2024-11-18 05:52:48.347046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.347337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.347640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.577  [2024-11-18 05:52:48.347973] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.577  [2024-11-18 05:52:48.348258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=f0a1
00:07:27.577  [2024-11-18 05:52:48.348506] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=416
00:07:27.577  passed
00:07:27.577    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-18 05:52:48.348793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.577  [2024-11-18 05:52:48.349087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38564660, Actual=38574660
00:07:27.577  [2024-11-18 05:52:48.349416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.349723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.350065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.577  [2024-11-18 05:52:48.350347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.577  [2024-11-18 05:52:48.350663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=e8459fec
00:07:27.577  [2024-11-18 05:52:48.350918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=788ec34d
00:07:27.577  [2024-11-18 05:52:48.351191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.577  [2024-11-18 05:52:48.351477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88000a2d4837a266, Actual=88010a2d4837a266
00:07:27.577  [2024-11-18 05:52:48.351807] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.352102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.352460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.577  [2024-11-18 05:52:48.352793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.577  [2024-11-18 05:52:48.353102] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.577  [2024-11-18 05:52:48.353344] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=a453e0d9fd8440c5
00:07:27.577  passed
00:07:27.577    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-18 05:52:48.353610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4d, Actual=fd4c
00:07:27.577  [2024-11-18 05:52:48.353909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe20, Actual=fe21
00:07:27.577  [2024-11-18 05:52:48.354217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.354524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.354848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.577  [2024-11-18 05:52:48.355147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.577  [2024-11-18 05:52:48.355458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=f0a1
00:07:27.577  [2024-11-18 05:52:48.355688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=416
00:07:27.577  passed
00:07:27.577    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-18 05:52:48.355988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.577  [2024-11-18 05:52:48.356291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38564660, Actual=38574660
00:07:27.577  [2024-11-18 05:52:48.356613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.356917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.357207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.577  [2024-11-18 05:52:48.357502] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.577  [2024-11-18 05:52:48.357828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=e8459fec
00:07:27.577  [2024-11-18 05:52:48.358055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=788ec34d
00:07:27.577  [2024-11-18 05:52:48.358347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.577  [2024-11-18 05:52:48.358630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88000a2d4837a266, Actual=88010a2d4837a266
00:07:27.577  [2024-11-18 05:52:48.358951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.359252] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.359550] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.577  [2024-11-18 05:52:48.359875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.577  [2024-11-18 05:52:48.360187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.577  [2024-11-18 05:52:48.360414] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=a453e0d9fd8440c5
00:07:27.577  passed
00:07:27.577    Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed
00:07:27.577    Test: dif_copy_sec_512_md_8_dif_disable_single_iov ...passed
00:07:27.577    Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:27.577    Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:27.577    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:27.577    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_bounce_iovs_test ...passed
00:07:27.577    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:27.577    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:27.577    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:27.577    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:27.577    Test: dif_copy_sec_512_md_8_prchk_7_multi_bounce_iovs_complex_splits ...passed
00:07:27.577    Test: dif_copy_sec_512_md_8_dif_disable_multi_bounce_iovs_complex_splits ...passed
00:07:27.577    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:27.577    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 05:52:48.418416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fd4d, Actual=fd4c
00:07:27.577  [2024-11-18 05:52:48.419569] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=f3dc, Actual=f3dd
00:07:27.577  [2024-11-18 05:52:48.420741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.421896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.423008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1005c
00:07:27.577  [2024-11-18 05:52:48.424133] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1005c
00:07:27.577  [2024-11-18 05:52:48.425248] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fd4c, Actual=f0a1
00:07:27.577  [2024-11-18 05:52:48.426358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=4e97, Actual=b4a0
00:07:27.577  [2024-11-18 05:52:48.427475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.577  [2024-11-18 05:52:48.428625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=6b0ec113, Actual=6b0fc113
00:07:27.577  [2024-11-18 05:52:48.429810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.577  [2024-11-18 05:52:48.430929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.432065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100000000005c
00:07:27.578  [2024-11-18 05:52:48.433190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100000000005c
00:07:27.578  [2024-11-18 05:52:48.434330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=1ab753ed, Actual=e8459fec
00:07:27.578  [2024-11-18 05:52:48.435452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=2b267559, Actual=6bfff074
00:07:27.578  [2024-11-18 05:52:48.436581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.578  [2024-11-18 05:52:48.437726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=9a8d320703cfbeac, Actual=9a8c320703cfbeac
00:07:27.578  [2024-11-18 05:52:48.438890] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.439995] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.441128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c
00:07:27.578  [2024-11-18 05:52:48.442260] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c
00:07:27.578  [2024-11-18 05:52:48.443375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.578  passed
00:07:27.578    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 05:52:48.444491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=cd8425887035b4fd, Actual=e1d6cf7cc586565e
00:07:27.578  [2024-11-18 05:52:48.444871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4d, Actual=fd4c
00:07:27.578  [2024-11-18 05:52:48.445141] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=905d, Actual=905c
00:07:27.578  [2024-11-18 05:52:48.445409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.445670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.445970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.578  [2024-11-18 05:52:48.446263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.578  [2024-11-18 05:52:48.446541] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=f0a1
00:07:27.578  [2024-11-18 05:52:48.446835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=d721
00:07:27.578  [2024-11-18 05:52:48.447107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.578  [2024-11-18 05:52:48.447380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=37e228c5, Actual=37e328c5
00:07:27.578  [2024-11-18 05:52:48.447670] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.447952] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.448235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.578  [2024-11-18 05:52:48.448520] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.578  [2024-11-18 05:52:48.448809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=e8459fec
00:07:27.578  [2024-11-18 05:52:48.449074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=77ca9c8f, Actual=371319a2
00:07:27.578  [2024-11-18 05:52:48.449346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.578  [2024-11-18 05:52:48.449622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=6f6f3d9425203af3, Actual=6f6e3d9425203af3
00:07:27.578  [2024-11-18 05:52:48.449893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.450159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.450416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.578  [2024-11-18 05:52:48.450678] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.578  [2024-11-18 05:52:48.450982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.578  passed
00:07:27.578    Test: dix_sec_0_md_8_error ...passed
00:07:27.578  [2024-11-18 05:52:48.451256] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=1434c0efe369d201
00:07:27.578  [2024-11-18 05:52:48.451325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 609:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:07:27.578  
00:07:27.578    Test: dix_sec_512_md_0_error ...passed
00:07:27.578    Test: dix_sec_512_md_16_error ...passed
00:07:27.578    Test: dix_sec_4096_md_0_8_error ...[2024-11-18 05:52:48.451356] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.578  [2024-11-18 05:52:48.451393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:27.578  [2024-11-18 05:52:48.451417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:27.578  [2024-11-18 05:52:48.451460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.578  [2024-11-18 05:52:48.451493] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.578  passed
00:07:27.578    Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-11-18 05:52:48.451519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.578  [2024-11-18 05:52:48.451534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:27.578  passed
00:07:27.578    Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:27.578    Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:27.578    Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:27.578    Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:27.578    Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:27.578    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:27.578    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:27.578    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:27.578    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-18 05:52:48.495265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fd4d, Actual=fd4c
00:07:27.578  [2024-11-18 05:52:48.496407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=f3dc, Actual=f3dd
00:07:27.578  [2024-11-18 05:52:48.497530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.498651] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.499786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1005c
00:07:27.578  [2024-11-18 05:52:48.500875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=1005c
00:07:27.578  [2024-11-18 05:52:48.501978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=fd4c, Actual=f0a1
00:07:27.578  [2024-11-18 05:52:48.503069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=4e97, Actual=b4a0
00:07:27.578  [2024-11-18 05:52:48.504162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.578  [2024-11-18 05:52:48.505278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=6b0ec113, Actual=6b0fc113
00:07:27.578  [2024-11-18 05:52:48.506369] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.507469] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.578  [2024-11-18 05:52:48.508586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100000000005c
00:07:27.579  [2024-11-18 05:52:48.509680] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=100000000005c
00:07:27.579  [2024-11-18 05:52:48.510789] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=1ab753ed, Actual=e8459fec
00:07:27.579  [2024-11-18 05:52:48.511884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=2b267559, Actual=6bfff074
00:07:27.579  [2024-11-18 05:52:48.512980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.579  [2024-11-18 05:52:48.514097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=9a8d320703cfbeac, Actual=9a8c320703cfbeac
00:07:27.579  [2024-11-18 05:52:48.515199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.516311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=92,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.517415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c
00:07:27.579  [2024-11-18 05:52:48.518504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=92, Expected=5c, Actual=10000005c
00:07:27.579  [2024-11-18 05:52:48.519614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.579  passed
00:07:27.579    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-18 05:52:48.520718] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=92,  Expected=cd8425887035b4fd, Actual=e1d6cf7cc586565e
00:07:27.579  [2024-11-18 05:52:48.521065] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4d, Actual=fd4c
00:07:27.579  [2024-11-18 05:52:48.521321] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=905d, Actual=905c
00:07:27.579  [2024-11-18 05:52:48.521597] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.521864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.522129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.579  [2024-11-18 05:52:48.522394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10058
00:07:27.579  [2024-11-18 05:52:48.522663] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=f0a1
00:07:27.579  [2024-11-18 05:52:48.522937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=d721
00:07:27.579  [2024-11-18 05:52:48.523203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab653ed, Actual=1ab753ed
00:07:27.579  [2024-11-18 05:52:48.523448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=aa8ef4e6, Actual=aa8ff4e6
00:07:27.579  [2024-11-18 05:52:48.523716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.523975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.524247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.579  [2024-11-18 05:52:48.524511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=1000000000058
00:07:27.579  [2024-11-18 05:52:48.524791] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=e8459fec
00:07:27.579  [2024-11-18 05:52:48.525041] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=eaa640ac, Actual=aa7fc581
00:07:27.579  [2024-11-18 05:52:48.525299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a577a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:27.579  [2024-11-18 05:52:48.525552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=6f6f3d9425203af3, Actual=6f6e3d9425203af3
00:07:27.579  [2024-11-18 05:52:48.525842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.526107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=89
00:07:27.579  [2024-11-18 05:52:48.526358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.579  [2024-11-18 05:52:48.526619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100000058
00:07:27.579  [2024-11-18 05:52:48.526903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=ebbb83d2cb8dce2e
00:07:27.579  [2024-11-18 05:52:48.527182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=1434c0efe369d201
00:07:27.579  passed
00:07:27.579    Test: set_md_interleave_iovs_test ...passed
00:07:27.579    Test: set_md_interleave_iovs_split_test ...passed
00:07:27.579    Test: dif_generate_stream_pi_16_test ...passed
00:07:27.579    Test: dif_generate_stream_test ...passed
00:07:27.579    Test: set_md_interleave_iovs_alignment_test ...passed
00:07:27.579    Test: dif_generate_split_test ...[2024-11-18 05:52:48.534823] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1946:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur.
00:07:27.579  passed
00:07:27.579    Test: set_md_interleave_iovs_multi_segments_test ...passed
00:07:27.579    Test: dif_verify_split_test ...passed
00:07:27.579    Test: dif_verify_stream_multi_segments_test ...passed
00:07:27.579    Test: update_crc32c_pi_16_test ...passed
00:07:27.579    Test: update_crc32c_test ...passed
00:07:27.579    Test: dif_update_crc32c_split_test ...passed
00:07:27.579    Test: dif_update_crc32c_stream_multi_segments_test ...passed
00:07:27.579    Test: get_range_with_md_test ...passed
00:07:27.579    Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed
00:07:27.839    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed
00:07:27.839    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:27.839    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed
00:07:27.839    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed
00:07:27.839    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:27.839    Test: dif_generate_and_verify_unmap_test ...passed
00:07:27.839    Test: dif_pi_format_check_test ...passed
00:07:27.839    Test: dif_type_check_test ...passed
00:07:27.839  
00:07:27.839  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.839                suites      1      1    n/a      0        0
00:07:27.839                 tests     90     90     90      0        0
00:07:27.839               asserts   3705   3705   3705      0      n/a
00:07:27.839  
00:07:27.839  Elapsed time =    0.366 seconds
00:07:27.839   05:52:48 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
00:07:27.839  
00:07:27.839  
00:07:27.839       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.839       http://cunit.sourceforge.net/
00:07:27.839  
00:07:27.839  
00:07:27.839  Suite: iov
00:07:27.839    Test: test_single_iov ...passed
00:07:27.839    Test: test_simple_iov ...passed
00:07:27.839    Test: test_complex_iov ...passed
00:07:27.839    Test: test_iovs_to_buf ...passed
00:07:27.839    Test: test_buf_to_iovs ...passed
00:07:27.839    Test: test_memset ...passed
00:07:27.839    Test: test_iov_one ...passed
00:07:27.839    Test: test_iov_xfer ...passed
00:07:27.839  
00:07:27.839  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.839                suites      1      1    n/a      0        0
00:07:27.839                 tests      8      8      8      0        0
00:07:27.839               asserts    156    156    156      0      n/a
00:07:27.839  
00:07:27.839  Elapsed time =    0.000 seconds
00:07:27.839   05:52:48 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut
00:07:27.839  
00:07:27.839  
00:07:27.839       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.839       http://cunit.sourceforge.net/
00:07:27.839  
00:07:27.839  
00:07:27.839  Suite: math
00:07:27.839    Test: test_serial_number_arithmetic ...passed
00:07:27.839  Suite: erase
00:07:27.839    Test: test_memset_s ...passed
00:07:27.839  
00:07:27.839  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.839                suites      2      2    n/a      0        0
00:07:27.839                 tests      2      2      2      0        0
00:07:27.839               asserts     18     18     18      0      n/a
00:07:27.839  
00:07:27.839  Elapsed time =    0.000 seconds
00:07:27.839   05:52:48 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut
00:07:27.839  
00:07:27.839  
00:07:27.839       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.839       http://cunit.sourceforge.net/
00:07:27.839  
00:07:27.839  
00:07:27.839  Suite: pipe
00:07:27.839    Test: test_create_destroy ...passed
00:07:27.839    Test: test_write_get_buffer ...passed
00:07:27.839    Test: test_write_advance ...passed
00:07:27.839    Test: test_read_get_buffer ...passed
00:07:27.839    Test: test_read_advance ...passed
00:07:27.839    Test: test_data ...passed
00:07:27.839  
00:07:27.839  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.839                suites      1      1    n/a      0        0
00:07:27.839                 tests      6      6      6      0        0
00:07:27.839               asserts    251    251    251      0      n/a
00:07:27.839  
00:07:27.839  Elapsed time =    0.000 seconds
00:07:27.839    05:52:48 unittest.unittest_util -- unit/unittest.sh@146 -- # uname -s
00:07:27.839   05:52:48 unittest.unittest_util -- unit/unittest.sh@146 -- # '[' Linux = Linux ']'
00:07:27.839   05:52:48 unittest.unittest_util -- unit/unittest.sh@147 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/fd_group.c/fd_group_ut
00:07:27.839  
00:07:27.839  
00:07:27.839       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.839       http://cunit.sourceforge.net/
00:07:27.839  
00:07:27.839  
00:07:27.839  Suite: fd_group
00:07:27.839    Test: test_fd_group_basic ...passed
00:07:27.839    Test: test_fd_group_nest_unnest ...passed
00:07:27.839  
00:07:27.839  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.839                suites      1      1    n/a      0        0
00:07:27.839                 tests      2      2      2      0        0
00:07:27.839               asserts     41     41     41      0      n/a
00:07:27.839  
00:07:27.839  Elapsed time =    0.000 seconds
00:07:27.839  
00:07:27.839  real	0m0.766s
00:07:27.839  user	0m0.544s
00:07:27.839  sys	0m0.228s
00:07:27.839   05:52:48 unittest.unittest_util -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.839   05:52:48 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x
00:07:27.839  ************************************
00:07:27.839  END TEST unittest_util
00:07:27.839  ************************************
00:07:27.839   05:52:48 unittest -- unit/unittest.sh@263 -- # [[ y == y ]]
00:07:27.839   05:52:48 unittest -- unit/unittest.sh@264 -- # run_test unittest_fsdev unittest_fsdev
00:07:27.839   05:52:48 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.839   05:52:48 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.839   05:52:48 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:27.839  ************************************
00:07:27.839  START TEST unittest_fsdev
00:07:27.839  ************************************
00:07:27.839   05:52:48 unittest.unittest_fsdev -- common/autotest_common.sh@1129 -- # unittest_fsdev
00:07:27.839   05:52:48 unittest.unittest_fsdev -- unit/unittest.sh@152 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/fsdev/fsdev.c/fsdev_ut
00:07:27.839  
00:07:27.839  
00:07:27.839       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.839       http://cunit.sourceforge.net/
00:07:27.839  
00:07:27.839  
00:07:27.839  Suite: fsdev
00:07:27.839    Test: ut_fsdev_test_open_close ...passed
00:07:27.839    Test: ut_fsdev_test_set_opts ...passed
00:07:27.839    Test: ut_fsdev_test_get_io_channel ...[2024-11-18 05:52:48.789393] fsdev.c: 631:spdk_fsdev_set_opts: *ERROR*: opts cannot be NULL
00:07:27.839  [2024-11-18 05:52:48.789615] fsdev.c: 636:spdk_fsdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:07:27.839  passed
00:07:27.839    Test: ut_fsdev_test_mount_ok ...passed
00:07:27.839    Test: ut_fsdev_test_mount_err ...passed
00:07:27.839    Test: ut_fsdev_test_umount ...passed
00:07:27.839    Test: ut_fsdev_test_lookup_ok ...passed
00:07:27.839    Test: ut_fsdev_test_lookup_err ...passed
00:07:27.839    Test: ut_fsdev_test_forget ...passed
00:07:27.839    Test: ut_fsdev_test_getattr ...passed
00:07:27.839    Test: ut_fsdev_test_setattr ...passed
00:07:27.839    Test: ut_fsdev_test_readlink ...passed
00:07:27.839    Test: ut_fsdev_test_symlink ...passed
00:07:27.839    Test: ut_fsdev_test_mknod ...passed
00:07:27.839    Test: ut_fsdev_test_mkdir ...passed
00:07:27.839    Test: ut_fsdev_test_unlink ...passed
00:07:27.839    Test: ut_fsdev_test_rmdir ...passed
00:07:27.839    Test: ut_fsdev_test_rename ...passed
00:07:27.839    Test: ut_fsdev_test_link ...passed
00:07:27.839    Test: ut_fsdev_test_fopen ...passed
00:07:27.839    Test: ut_fsdev_test_read ...passed
00:07:27.839    Test: ut_fsdev_test_write ...passed
00:07:28.100    Test: ut_fsdev_test_statfs ...passed
00:07:28.100    Test: ut_fsdev_test_release ...passed
00:07:28.100    Test: ut_fsdev_test_fsync ...passed
00:07:28.100    Test: ut_fsdev_test_getxattr ...passed
00:07:28.100    Test: ut_fsdev_test_setxattr ...passed
00:07:28.100    Test: ut_fsdev_test_listxattr ...passed
00:07:28.100    Test: ut_fsdev_test_listxattr_get_size ...passed
00:07:28.100    Test: ut_fsdev_test_removexattr ...passed
00:07:28.100    Test: ut_fsdev_test_flush ...passed
00:07:28.100    Test: ut_fsdev_test_opendir ...passed
00:07:28.100    Test: ut_fsdev_test_readdir ...passed
00:07:28.100    Test: ut_fsdev_test_releasedir ...passed
00:07:28.100    Test: ut_fsdev_test_fsyncdir ...passed
00:07:28.100    Test: ut_fsdev_test_flock ...passed
00:07:28.100    Test: ut_fsdev_test_create ...passed
00:07:28.100    Test: ut_fsdev_test_abort ...passed
00:07:28.100    Test: ut_fsdev_test_fallocate ...passed
00:07:28.100    Test: ut_fsdev_test_copy_file_range ...passed
00:07:28.100  
00:07:28.100  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.100                suites      1      1    n/a      0        0
00:07:28.100                 tests     40     40     40      0        0
00:07:28.100               asserts   2840   2840   2840      0      n/a
00:07:28.100  
00:07:28.100  Elapsed time =    0.051 seconds
00:07:28.100  [2024-11-18 05:52:48.839377] fsdev.c: 354:fsdev_mgr_unregister_cb: *ERROR*: fsdev IO pool count is 65535 but should be 131070
00:07:28.100  
00:07:28.100  real	0m0.097s
00:07:28.100  user	0m0.062s
00:07:28.100  sys	0m0.035s
00:07:28.100   05:52:48 unittest.unittest_fsdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.100  ************************************
00:07:28.100  END TEST unittest_fsdev
00:07:28.100  ************************************
00:07:28.100   05:52:48 unittest.unittest_fsdev -- common/autotest_common.sh@10 -- # set +x
00:07:28.100   05:52:48 unittest -- unit/unittest.sh@266 -- # [[ y == y ]]
00:07:28.100   05:52:48 unittest -- unit/unittest.sh@267 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:28.100   05:52:48 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.100   05:52:48 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.100   05:52:48 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:28.100  ************************************
00:07:28.100  START TEST unittest_vhost
00:07:28.100  ************************************
00:07:28.100   05:52:48 unittest.unittest_vhost -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:28.100  
00:07:28.100  
00:07:28.100       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.100       http://cunit.sourceforge.net/
00:07:28.100  
00:07:28.100  
00:07:28.100  Suite: vhost_suite
00:07:28.100    Test: desc_to_iov_test ...[2024-11-18 05:52:48.941611] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached
00:07:28.100  passed
00:07:28.100    Test: create_controller_test ...[2024-11-18 05:52:48.946982] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:28.100  [2024-11-18 05:52:48.947106] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:07:28.100  [2024-11-18 05:52:48.947251] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:28.100  [2024-11-18 05:52:48.947344] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:07:28.100  [2024-11-18 05:52:48.947388] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 125:vhost_dev_register: *ERROR*: Can't register controller with no name
00:07:28.101  [2024-11-18 05:52:48.947884] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller xxxxxxxxxxxxxxxx[...] is too long: some_path/xxxxxxxxxxxxxxxx[...]
00:07:28.101  [2024-11-18 05:52:48.949199] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 141:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists.
00:07:28.101  passed
00:07:28.101    Test: session_find_by_vid_test ...passed
00:07:28.101    Test: remove_controller_test ...[2024-11-18 05:52:48.951716] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1869:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection.
00:07:28.101  passed
00:07:28.101    Test: vq_avail_ring_get_test ...passed
00:07:28.101    Test: vq_packed_ring_test ...passed
00:07:28.101    Test: vhost_blk_construct_test ...passed
00:07:28.101  
00:07:28.101  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.101                suites      1      1    n/a      0        0
00:07:28.101                 tests      7      7      7      0        0
00:07:28.101               asserts    147    147    147      0      n/a
00:07:28.101  
00:07:28.101  Elapsed time =    0.015 seconds
00:07:28.101  
00:07:28.101  real	0m0.056s
00:07:28.101  user	0m0.033s
00:07:28.101  sys	0m0.023s
00:07:28.101   05:52:48 unittest.unittest_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.101  ************************************
00:07:28.101  END TEST unittest_vhost
00:07:28.101  ************************************
00:07:28.101   05:52:48 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x
00:07:28.101   05:52:49 unittest -- unit/unittest.sh@269 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:28.101   05:52:49 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.101   05:52:49 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.101   05:52:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:28.101  ************************************
00:07:28.101  START TEST unittest_dma
00:07:28.101  ************************************
00:07:28.101   05:52:49 unittest.unittest_dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:28.101  
00:07:28.101  
00:07:28.101       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.101       http://cunit.sourceforge.net/
00:07:28.101  
00:07:28.101  
00:07:28.101  Suite: dma_suite
00:07:28.101    Test: test_dma ...passed
00:07:28.101  
00:07:28.101  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.101                suites      1      1    n/a      0        0
00:07:28.101                 tests      1      1      1      0        0
00:07:28.101               asserts     54     54     54      0      n/a
00:07:28.101  
00:07:28.101  Elapsed time =    0.000 seconds
00:07:28.101  [2024-11-18 05:52:49.048287] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c:  60:spdk_memory_domain_create: *ERROR*: Context size can't be 0
00:07:28.101  
00:07:28.101  real	0m0.033s
00:07:28.101  user	0m0.019s
00:07:28.101  sys	0m0.014s
00:07:28.101   05:52:49 unittest.unittest_dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.101  ************************************
00:07:28.101  END TEST unittest_dma
00:07:28.101  ************************************
00:07:28.101   05:52:49 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x
00:07:28.361   05:52:49 unittest -- unit/unittest.sh@271 -- # run_test unittest_init unittest_init
00:07:28.361   05:52:49 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.361   05:52:49 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.361   05:52:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:28.361  ************************************
00:07:28.361  START TEST unittest_init
00:07:28.361  ************************************
00:07:28.361   05:52:49 unittest.unittest_init -- common/autotest_common.sh@1129 -- # unittest_init
00:07:28.361   05:52:49 unittest.unittest_init -- unit/unittest.sh@156 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut
00:07:28.361  
00:07:28.361  
00:07:28.361       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.361       http://cunit.sourceforge.net/
00:07:28.361  
00:07:28.361  
00:07:28.361  Suite: subsystem_suite
00:07:28.361    Test: subsystem_sort_test_depends_on_single ...passed
00:07:28.361    Test: subsystem_sort_test_depends_on_multiple ...passed
00:07:28.361    Test: subsystem_sort_test_missing_dependency ...passed
00:07:28.361  
00:07:28.361  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.361                suites      1      1    n/a      0        0
00:07:28.361                 tests      3      3      3      0        0
00:07:28.361               asserts     20     20     20      0      n/a
00:07:28.361  
00:07:28.361  Elapsed time =    0.000 seconds
00:07:28.361  [2024-11-18 05:52:49.140595] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing
00:07:28.361  [2024-11-18 05:52:49.140897] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing
00:07:28.361  
00:07:28.361  real	0m0.041s
00:07:28.361  user	0m0.020s
00:07:28.361  sys	0m0.022s
00:07:28.361   05:52:49 unittest.unittest_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.361   05:52:49 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x
00:07:28.361  ************************************
00:07:28.361  END TEST unittest_init
00:07:28.361  ************************************
00:07:28.361   05:52:49 unittest -- unit/unittest.sh@272 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:07:28.361   05:52:49 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.361   05:52:49 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.361   05:52:49 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:28.361  ************************************
00:07:28.361  START TEST unittest_keyring
00:07:28.361  ************************************
00:07:28.361   05:52:49 unittest.unittest_keyring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:07:28.361  
00:07:28.361  
00:07:28.361       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.361       http://cunit.sourceforge.net/
00:07:28.361  
00:07:28.361  
00:07:28.361  Suite: keyring
00:07:28.361    Test: test_keyring_add_remove ...[2024-11-18 05:52:49.231454] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists
00:07:28.361  [2024-11-18 05:52:49.231682] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists
00:07:28.361  passed
00:07:28.361    Test: test_keyring_get_put ...[2024-11-18 05:52:49.231726] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 168:spdk_keyring_remove_key: *ERROR*: Key 'key0' is not owned by module 'ut2'
00:07:28.361  [2024-11-18 05:52:49.231779] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key 'key0' does not exist
00:07:28.361  [2024-11-18 05:52:49.231816] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key ':key0' does not exist
00:07:28.361  [2024-11-18 05:52:49.231854] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:07:28.361  passed
00:07:28.361  
00:07:28.361  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.361                suites      1      1    n/a      0        0
00:07:28.361                 tests      2      2      2      0        0
00:07:28.361               asserts     46     46     46      0      n/a
00:07:28.361  
00:07:28.361  Elapsed time =    0.001 seconds
00:07:28.361  
00:07:28.361  real	0m0.031s
00:07:28.361  user	0m0.016s
00:07:28.361  sys	0m0.015s
00:07:28.361   05:52:49 unittest.unittest_keyring -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.361  ************************************
00:07:28.361  END TEST unittest_keyring
00:07:28.361  ************************************
00:07:28.361   05:52:49 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x
00:07:28.361   05:52:49 unittest -- unit/unittest.sh@274 -- # [[ y == y ]]
00:07:28.361    05:52:49 unittest -- unit/unittest.sh@275 -- # hostname
00:07:28.361   05:52:49 unittest -- unit/unittest.sh@275 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:07:28.620  geninfo: WARNING: invalid characters removed from testname!
00:08:07.341   05:53:24 unittest -- unit/unittest.sh@276 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:08:09.242   05:53:30 unittest -- unit/unittest.sh@277 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:12.528   05:53:33 unittest -- unit/unittest.sh@278 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:15.816   05:53:36 unittest -- unit/unittest.sh@279 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:19.104   05:53:39 unittest -- unit/unittest.sh@280 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:21.639   05:53:42 unittest -- unit/unittest.sh@281 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:24.929   05:53:45 unittest -- unit/unittest.sh@282 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:08:24.929   05:53:45 unittest -- unit/unittest.sh@283 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:25.498  Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:25.498  Found 338 entries.
00:08:25.498  Found common filename prefix "/home/vagrant/spdk_repo/spdk"
00:08:25.498  Writing .css and .png files.
00:08:25.498  Generating output.
00:08:25.498  Processing file include/linux/virtio_ring.h
00:08:25.760  Processing file include/spdk/mmio.h
00:08:25.760  Processing file include/spdk/base64.h
00:08:25.760  Processing file include/spdk/endian.h
00:08:25.760  Processing file include/spdk/fsdev_module.h
00:08:25.760  Processing file include/spdk/nvme.h
00:08:25.760  Processing file include/spdk/nvme_spec.h
00:08:25.760  Processing file include/spdk/bdev_module.h
00:08:25.760  Processing file include/spdk/nvmf_transport.h
00:08:25.760  Processing file include/spdk/histogram_data.h
00:08:25.760  Processing file include/spdk/util.h
00:08:25.760  Processing file include/spdk/trace.h
00:08:25.760  Processing file include/spdk/thread.h
00:08:26.021  Processing file include/spdk_internal/rdma_utils.h
00:08:26.021  Processing file include/spdk_internal/nvme_tcp.h
00:08:26.021  Processing file include/spdk_internal/sgl.h
00:08:26.021  Processing file include/spdk_internal/virtio.h
00:08:26.021  Processing file include/spdk_internal/sock.h
00:08:26.021  Processing file include/spdk_internal/utf.h
00:08:26.021  Processing file lib/accel/accel.c
00:08:26.021  Processing file lib/accel/accel_sw.c
00:08:26.021  Processing file lib/accel/accel_rpc.c
00:08:26.280  Processing file lib/bdev/part.c
00:08:26.280  Processing file lib/bdev/scsi_nvme.c
00:08:26.280  Processing file lib/bdev/bdev_rpc.c
00:08:26.280  Processing file lib/bdev/bdev_zone.c
00:08:26.280  Processing file lib/bdev/bdev.c
00:08:26.538  Processing file lib/blob/request.c
00:08:26.538  Processing file lib/blob/blob_bs_dev.c
00:08:26.538  Processing file lib/blob/blobstore.c
00:08:26.538  Processing file lib/blob/zeroes.c
00:08:26.538  Processing file lib/blob/blobstore.h
00:08:26.798  Processing file lib/blobfs/blobfs.c
00:08:26.798  Processing file lib/blobfs/tree.c
00:08:26.798  Processing file lib/conf/conf.c
00:08:26.798  Processing file lib/dma/dma.c
00:08:27.056  Processing file lib/env_dpdk/env.c
00:08:27.056  Processing file lib/env_dpdk/pci_virtio.c
00:08:27.056  Processing file lib/env_dpdk/sigbus_handler.c
00:08:27.056  Processing file lib/env_dpdk/init.c
00:08:27.056  Processing file lib/env_dpdk/memory.c
00:08:27.056  Processing file lib/env_dpdk/pci_event.c
00:08:27.056  Processing file lib/env_dpdk/pci_ioat.c
00:08:27.056  Processing file lib/env_dpdk/pci_dpdk_2207.c
00:08:27.056  Processing file lib/env_dpdk/pci_dpdk.c
00:08:27.056  Processing file lib/env_dpdk/pci_dpdk_2211.c
00:08:27.056  Processing file lib/env_dpdk/pci_idxd.c
00:08:27.056  Processing file lib/env_dpdk/pci.c
00:08:27.056  Processing file lib/env_dpdk/pci_vmd.c
00:08:27.056  Processing file lib/env_dpdk/threads.c
00:08:27.315  Processing file lib/event/log_rpc.c
00:08:27.315  Processing file lib/event/app.c
00:08:27.315  Processing file lib/event/scheduler_static.c
00:08:27.315  Processing file lib/event/reactor.c
00:08:27.315  Processing file lib/event/app_rpc.c
00:08:27.315  Processing file lib/fsdev/fsdev_io.c
00:08:27.315  Processing file lib/fsdev/fsdev.c
00:08:27.315  Processing file lib/fsdev/fsdev_rpc.c
00:08:27.881  Processing file lib/ftl/ftl_debug.c
00:08:27.881  Processing file lib/ftl/ftl_core.h
00:08:27.881  Processing file lib/ftl/ftl_core.c
00:08:27.881  Processing file lib/ftl/ftl_nv_cache_io.h
00:08:27.881  Processing file lib/ftl/ftl_nv_cache.c
00:08:27.881  Processing file lib/ftl/ftl_band.c
00:08:27.881  Processing file lib/ftl/ftl_p2l_log.c
00:08:27.881  Processing file lib/ftl/ftl_rq.c
00:08:27.881  Processing file lib/ftl/ftl_reloc.c
00:08:27.881  Processing file lib/ftl/ftl_writer.h
00:08:27.881  Processing file lib/ftl/ftl_layout.c
00:08:27.881  Processing file lib/ftl/ftl_debug.h
00:08:27.881  Processing file lib/ftl/ftl_writer.c
00:08:27.881  Processing file lib/ftl/ftl_io.c
00:08:27.881  Processing file lib/ftl/ftl_trace.c
00:08:27.881  Processing file lib/ftl/ftl_band_ops.c
00:08:27.881  Processing file lib/ftl/ftl_band.h
00:08:27.881  Processing file lib/ftl/ftl_io.h
00:08:27.881  Processing file lib/ftl/ftl_p2l.c
00:08:27.881  Processing file lib/ftl/ftl_l2p_cache.c
00:08:27.881  Processing file lib/ftl/ftl_nv_cache.h
00:08:27.881  Processing file lib/ftl/ftl_sb.c
00:08:27.881  Processing file lib/ftl/ftl_init.c
00:08:27.881  Processing file lib/ftl/ftl_l2p.c
00:08:27.881  Processing file lib/ftl/ftl_l2p_flat.c
00:08:27.881  Processing file lib/ftl/base/ftl_base_dev.c
00:08:27.881  Processing file lib/ftl/base/ftl_base_bdev.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_misc.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_bdev.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_recovery.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_l2p.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_ioch.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_upgrade.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_p2l.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_self_test.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_band.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_shutdown.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_startup.c
00:08:28.140  Processing file lib/ftl/mngt/ftl_mngt_md.c
00:08:28.140  Processing file lib/ftl/nvc/ftl_nvc_bdev_non_vss.c
00:08:28.140  Processing file lib/ftl/nvc/ftl_nvc_bdev_common.c
00:08:28.140  Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c
00:08:28.140  Processing file lib/ftl/nvc/ftl_nvc_dev.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_sb_upgrade.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_layout_upgrade.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_band_upgrade.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_sb_v3.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_sb_v5.c
00:08:28.407  Processing file lib/ftl/upgrade/ftl_trim_upgrade.c
00:08:28.665  Processing file lib/ftl/utils/ftl_property.h
00:08:28.665  Processing file lib/ftl/utils/ftl_conf.c
00:08:28.665  Processing file lib/ftl/utils/ftl_md.c
00:08:28.665  Processing file lib/ftl/utils/ftl_bitmap.c
00:08:28.665  Processing file lib/ftl/utils/ftl_mempool.c
00:08:28.665  Processing file lib/ftl/utils/ftl_addr_utils.h
00:08:28.665  Processing file lib/ftl/utils/ftl_property.c
00:08:28.665  Processing file lib/ftl/utils/ftl_df.h
00:08:28.665  Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c
00:08:28.665  Processing file lib/fuse_dispatcher/fuse_dispatcher.c
00:08:28.665  Processing file lib/idxd/idxd_internal.h
00:08:28.665  Processing file lib/idxd/idxd_user.c
00:08:28.665  Processing file lib/idxd/idxd_kernel.c
00:08:28.665  Processing file lib/idxd/idxd.c
00:08:28.923  Processing file lib/init/rpc.c
00:08:28.923  Processing file lib/init/subsystem.c
00:08:28.923  Processing file lib/init/subsystem_rpc.c
00:08:28.923  Processing file lib/init/json_config.c
00:08:28.923  Processing file lib/ioat/ioat_internal.h
00:08:28.923  Processing file lib/ioat/ioat.c
00:08:29.182  Processing file lib/iscsi/iscsi_rpc.c
00:08:29.182  Processing file lib/iscsi/tgt_node.c
00:08:29.182  Processing file lib/iscsi/init_grp.c
00:08:29.182  Processing file lib/iscsi/portal_grp.c
00:08:29.182  Processing file lib/iscsi/iscsi_subsystem.c
00:08:29.182  Processing file lib/iscsi/conn.c
00:08:29.182  Processing file lib/iscsi/task.h
00:08:29.182  Processing file lib/iscsi/iscsi.c
00:08:29.182  Processing file lib/iscsi/param.c
00:08:29.182  Processing file lib/iscsi/task.c
00:08:29.182  Processing file lib/iscsi/iscsi.h
00:08:29.440  Processing file lib/json/json_util.c
00:08:29.440  Processing file lib/json/json_parse.c
00:08:29.440  Processing file lib/json/json_write.c
00:08:29.440  Processing file lib/jsonrpc/jsonrpc_client.c
00:08:29.440  Processing file lib/jsonrpc/jsonrpc_server_tcp.c
00:08:29.440  Processing file lib/jsonrpc/jsonrpc_client_tcp.c
00:08:29.440  Processing file lib/jsonrpc/jsonrpc_server.c
00:08:29.440  Processing file lib/keyring/keyring.c
00:08:29.440  Processing file lib/keyring/keyring_rpc.c
00:08:29.700  Processing file lib/log/log_deprecated.c
00:08:29.700  Processing file lib/log/log.c
00:08:29.700  Processing file lib/log/log_flags.c
00:08:29.700  Processing file lib/lvol/lvol.c
00:08:29.700  Processing file lib/nbd/nbd_rpc.c
00:08:29.700  Processing file lib/nbd/nbd.c
00:08:29.959  Processing file lib/notify/notify.c
00:08:29.959  Processing file lib/notify/notify_rpc.c
00:08:30.527  Processing file lib/nvme/nvme_zns.c
00:08:30.527  Processing file lib/nvme/nvme_ns_cmd.c
00:08:30.527  Processing file lib/nvme/nvme_poll_group.c
00:08:30.527  Processing file lib/nvme/nvme_opal.c
00:08:30.527  Processing file lib/nvme/nvme_fabric.c
00:08:30.527  Processing file lib/nvme/nvme_ns_ocssd_cmd.c
00:08:30.527  Processing file lib/nvme/nvme_auth.c
00:08:30.527  Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c
00:08:30.527  Processing file lib/nvme/nvme_tcp.c
00:08:30.527  Processing file lib/nvme/nvme_discovery.c
00:08:30.527  Processing file lib/nvme/nvme_pcie_common.c
00:08:30.527  Processing file lib/nvme/nvme_qpair.c
00:08:30.527  Processing file lib/nvme/nvme_ctrlr.c
00:08:30.527  Processing file lib/nvme/nvme_transport.c
00:08:30.527  Processing file lib/nvme/nvme_cuse.c
00:08:30.527  Processing file lib/nvme/nvme_ctrlr_cmd.c
00:08:30.527  Processing file lib/nvme/nvme_rdma.c
00:08:30.527  Processing file lib/nvme/nvme_internal.h
00:08:30.527  Processing file lib/nvme/nvme_ns.c
00:08:30.527  Processing file lib/nvme/nvme_pcie.c
00:08:30.527  Processing file lib/nvme/nvme_io_msg.c
00:08:30.527  Processing file lib/nvme/nvme_quirks.c
00:08:30.527  Processing file lib/nvme/nvme_pcie_internal.h
00:08:30.527  Processing file lib/nvme/nvme.c
00:08:31.095  Processing file lib/nvmf/rdma.c
00:08:31.095  Processing file lib/nvmf/tcp.c
00:08:31.095  Processing file lib/nvmf/ctrlr.c
00:08:31.095  Processing file lib/nvmf/auth.c
00:08:31.095  Processing file lib/nvmf/ctrlr_bdev.c
00:08:31.095  Processing file lib/nvmf/subsystem.c
00:08:31.095  Processing file lib/nvmf/nvmf_rpc.c
00:08:31.095  Processing file lib/nvmf/ctrlr_discovery.c
00:08:31.095  Processing file lib/nvmf/nvmf.c
00:08:31.095  Processing file lib/nvmf/stubs.c
00:08:31.095  Processing file lib/nvmf/nvmf_internal.h
00:08:31.095  Processing file lib/nvmf/transport.c
00:08:31.095  Processing file lib/rdma_provider/common.c
00:08:31.095  Processing file lib/rdma_provider/rdma_provider_verbs.c
00:08:31.354  Processing file lib/rdma_utils/rdma_utils.c
00:08:31.354  Processing file lib/rpc/rpc.c
00:08:31.613  Processing file lib/scsi/task.c
00:08:31.613  Processing file lib/scsi/scsi_bdev.c
00:08:31.613  Processing file lib/scsi/lun.c
00:08:31.613  Processing file lib/scsi/scsi_rpc.c
00:08:31.613  Processing file lib/scsi/dev.c
00:08:31.613  Processing file lib/scsi/port.c
00:08:31.613  Processing file lib/scsi/scsi.c
00:08:31.613  Processing file lib/scsi/scsi_pr.c
00:08:31.613  Processing file lib/sock/sock_rpc.c
00:08:31.613  Processing file lib/sock/sock.c
00:08:31.613  Processing file lib/thread/thread.c
00:08:31.613  Processing file lib/thread/iobuf.c
00:08:31.872  Processing file lib/trace/trace.c
00:08:31.872  Processing file lib/trace/trace_flags.c
00:08:31.872  Processing file lib/trace/trace_rpc.c
00:08:31.872  Processing file lib/trace_parser/trace.cpp
00:08:31.872  Processing file lib/ublk/ublk.c
00:08:31.872  Processing file lib/ublk/ublk_rpc.c
00:08:31.872  Processing file lib/ut/ut.c
00:08:32.131  Processing file lib/ut_mock/mock.c
00:08:32.391  Processing file lib/util/crc32c.c
00:08:32.391  Processing file lib/util/dif.c
00:08:32.391  Processing file lib/util/crc64.c
00:08:32.391  Processing file lib/util/uuid.c
00:08:32.391  Processing file lib/util/hexlify.c
00:08:32.391  Processing file lib/util/md5.c
00:08:32.391  Processing file lib/util/xor.c
00:08:32.391  Processing file lib/util/zipf.c
00:08:32.391  Processing file lib/util/fd_group.c
00:08:32.391  Processing file lib/util/crc32.c
00:08:32.391  Processing file lib/util/cpuset.c
00:08:32.391  Processing file lib/util/string.c
00:08:32.391  Processing file lib/util/crc16.c
00:08:32.391  Processing file lib/util/iov.c
00:08:32.391  Processing file lib/util/strerror_tls.c
00:08:32.391  Processing file lib/util/file.c
00:08:32.391  Processing file lib/util/fd.c
00:08:32.391  Processing file lib/util/base64.c
00:08:32.391  Processing file lib/util/crc32_ieee.c
00:08:32.391  Processing file lib/util/net.c
00:08:32.391  Processing file lib/util/math.c
00:08:32.391  Processing file lib/util/pipe.c
00:08:32.391  Processing file lib/util/bit_array.c
00:08:32.391  Processing file lib/vfio_user/host/vfio_user.c
00:08:32.391  Processing file lib/vfio_user/host/vfio_user_pci.c
00:08:32.651  Processing file lib/vhost/vhost_blk.c
00:08:32.651  Processing file lib/vhost/vhost_scsi.c
00:08:32.651  Processing file lib/vhost/rte_vhost_user.c
00:08:32.651  Processing file lib/vhost/vhost_rpc.c
00:08:32.651  Processing file lib/vhost/vhost_internal.h
00:08:32.651  Processing file lib/vhost/vhost.c
00:08:32.910  Processing file lib/virtio/virtio_vhost_user.c
00:08:32.910  Processing file lib/virtio/virtio_vfio_user.c
00:08:32.910  Processing file lib/virtio/virtio.c
00:08:32.910  Processing file lib/virtio/virtio_pci.c
00:08:32.910  Processing file lib/vmd/vmd.c
00:08:32.910  Processing file lib/vmd/led.c
00:08:32.910  Processing file module/accel/dsa/accel_dsa_rpc.c
00:08:32.910  Processing file module/accel/dsa/accel_dsa.c
00:08:32.910  Processing file module/accel/error/accel_error_rpc.c
00:08:32.910  Processing file module/accel/error/accel_error.c
00:08:33.168  Processing file module/accel/iaa/accel_iaa.c
00:08:33.168  Processing file module/accel/iaa/accel_iaa_rpc.c
00:08:33.168  Processing file module/accel/ioat/accel_ioat.c
00:08:33.168  Processing file module/accel/ioat/accel_ioat_rpc.c
00:08:33.168  Processing file module/bdev/aio/bdev_aio_rpc.c
00:08:33.168  Processing file module/bdev/aio/bdev_aio.c
00:08:33.427  Processing file module/bdev/delay/vbdev_delay.c
00:08:33.427  Processing file module/bdev/delay/vbdev_delay_rpc.c
00:08:33.427  Processing file module/bdev/error/vbdev_error.c
00:08:33.427  Processing file module/bdev/error/vbdev_error_rpc.c
00:08:33.427  Processing file module/bdev/ftl/bdev_ftl_rpc.c
00:08:33.427  Processing file module/bdev/ftl/bdev_ftl.c
00:08:33.686  Processing file module/bdev/gpt/vbdev_gpt.c
00:08:33.686  Processing file module/bdev/gpt/gpt.c
00:08:33.686  Processing file module/bdev/gpt/gpt.h
00:08:33.686  Processing file module/bdev/iscsi/bdev_iscsi_rpc.c
00:08:33.686  Processing file module/bdev/iscsi/bdev_iscsi.c
00:08:33.686  Processing file module/bdev/lvol/vbdev_lvol.c
00:08:33.686  Processing file module/bdev/lvol/vbdev_lvol_rpc.c
00:08:33.945  Processing file module/bdev/malloc/bdev_malloc.c
00:08:33.945  Processing file module/bdev/malloc/bdev_malloc_rpc.c
00:08:33.945  Processing file module/bdev/null/bdev_null_rpc.c
00:08:33.945  Processing file module/bdev/null/bdev_null.c
00:08:34.203  Processing file module/bdev/nvme/vbdev_opal_rpc.c
00:08:34.203  Processing file module/bdev/nvme/bdev_nvme_rpc.c
00:08:34.203  Processing file module/bdev/nvme/bdev_mdns_client.c
00:08:34.204  Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c
00:08:34.204  Processing file module/bdev/nvme/nvme_rpc.c
00:08:34.204  Processing file module/bdev/nvme/vbdev_opal.c
00:08:34.204  Processing file module/bdev/nvme/bdev_nvme.c
00:08:34.204  Processing file module/bdev/passthru/vbdev_passthru_rpc.c
00:08:34.204  Processing file module/bdev/passthru/vbdev_passthru.c
00:08:34.462  Processing file module/bdev/raid/bdev_raid_sb.c
00:08:34.462  Processing file module/bdev/raid/concat.c
00:08:34.462  Processing file module/bdev/raid/bdev_raid.h
00:08:34.462  Processing file module/bdev/raid/raid0.c
00:08:34.462  Processing file module/bdev/raid/raid1.c
00:08:34.462  Processing file module/bdev/raid/bdev_raid.c
00:08:34.462  Processing file module/bdev/raid/bdev_raid_rpc.c
00:08:34.462  Processing file module/bdev/split/vbdev_split.c
00:08:34.462  Processing file module/bdev/split/vbdev_split_rpc.c
00:08:34.721  Processing file module/bdev/virtio/bdev_virtio_scsi.c
00:08:34.721  Processing file module/bdev/virtio/bdev_virtio_blk.c
00:08:34.721  Processing file module/bdev/virtio/bdev_virtio_rpc.c
00:08:34.721  Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c
00:08:34.721  Processing file module/bdev/zone_block/vbdev_zone_block.c
00:08:34.721  Processing file module/blob/bdev/blob_bdev.c
00:08:34.981  Processing file module/blobfs/bdev/blobfs_bdev.c
00:08:34.981  Processing file module/blobfs/bdev/blobfs_bdev_rpc.c
00:08:34.981  Processing file module/env_dpdk/env_dpdk_rpc.c
00:08:34.981  Processing file module/event/subsystems/accel/accel.c
00:08:34.981  Processing file module/event/subsystems/bdev/bdev.c
00:08:34.981  Processing file module/event/subsystems/fsdev/fsdev.c
00:08:35.239  Processing file module/event/subsystems/iobuf/iobuf_rpc.c
00:08:35.239  Processing file module/event/subsystems/iobuf/iobuf.c
00:08:35.239  Processing file module/event/subsystems/iscsi/iscsi.c
00:08:35.239  Processing file module/event/subsystems/keyring/keyring.c
00:08:35.239  Processing file module/event/subsystems/nbd/nbd.c
00:08:35.240  Processing file module/event/subsystems/nvmf/nvmf_rpc.c
00:08:35.240  Processing file module/event/subsystems/nvmf/nvmf_tgt.c
00:08:35.499  Processing file module/event/subsystems/scheduler/scheduler.c
00:08:35.499  Processing file module/event/subsystems/scsi/scsi.c
00:08:35.499  Processing file module/event/subsystems/sock/sock.c
00:08:35.499  Processing file module/event/subsystems/ublk/ublk.c
00:08:35.499  Processing file module/event/subsystems/vhost_blk/vhost_blk.c
00:08:35.499  Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c
00:08:35.757  Processing file module/event/subsystems/vmd/vmd.c
00:08:35.757  Processing file module/event/subsystems/vmd/vmd_rpc.c
00:08:35.757  Processing file module/fsdev/aio/fsdev_aio.c
00:08:35.757  Processing file module/fsdev/aio/fsdev_aio_rpc.c
00:08:35.757  Processing file module/fsdev/aio/linux_aio_mgr.c
00:08:35.757  Processing file module/keyring/file/keyring_rpc.c
00:08:35.757  Processing file module/keyring/file/keyring.c
00:08:36.016  Processing file module/keyring/linux/keyring_rpc.c
00:08:36.016  Processing file module/keyring/linux/keyring.c
00:08:36.016  Processing file module/scheduler/dpdk_governor/dpdk_governor.c
00:08:36.016  Processing file module/scheduler/dynamic/scheduler_dynamic.c
00:08:36.016  Processing file module/scheduler/gscheduler/gscheduler.c
00:08:36.016  Processing file module/sock/posix/posix.c
00:08:36.016  Writing directory view page.
00:08:36.016  Overall coverage rate:
00:08:36.016    lines......: 37.2% (42161 of 113289 lines)
00:08:36.016    functions..: 40.9% (3904 of 9549 functions)
00:08:36.274  Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:36.274   05:53:57 unittest -- unit/unittest.sh@284 -- # echo 'Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage'
00:08:36.274  
00:08:36.274  
00:08:36.274  =====================
00:08:36.274  All unit tests passed
00:08:36.274  =====================
00:08:36.274  
00:08:36.274  
00:08:36.274   05:53:57 unittest -- unit/unittest.sh@287 -- # set +x
00:08:36.274  
00:08:36.274  real	2m28.880s
00:08:36.274  user	2m5.317s
00:08:36.274  sys	0m13.697s
00:08:36.274   05:53:57 unittest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:36.274   05:53:57 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:36.274  ************************************
00:08:36.274  END TEST unittest
00:08:36.274  ************************************
00:08:36.275   05:53:57  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:08:36.275   05:53:57  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:36.275   05:53:57  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:36.275   05:53:57  -- spdk/autotest.sh@149 -- # timing_enter lib
00:08:36.275   05:53:57  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:36.275   05:53:57  -- common/autotest_common.sh@10 -- # set +x
00:08:36.275   05:53:57  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:08:36.275   05:53:57  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:36.275   05:53:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:36.275   05:53:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:36.275   05:53:57  -- common/autotest_common.sh@10 -- # set +x
00:08:36.275  ************************************
00:08:36.275  START TEST env
00:08:36.275  ************************************
00:08:36.275   05:53:57 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:36.275  * Looking for test storage...
00:08:36.275  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:08:36.275    05:53:57 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:36.275     05:53:57 env -- common/autotest_common.sh@1693 -- # lcov --version
00:08:36.275     05:53:57 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:36.275    05:53:57 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:36.275    05:53:57 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:36.275    05:53:57 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:36.275    05:53:57 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:36.275    05:53:57 env -- scripts/common.sh@336 -- # IFS=.-:
00:08:36.275    05:53:57 env -- scripts/common.sh@336 -- # read -ra ver1
00:08:36.275    05:53:57 env -- scripts/common.sh@337 -- # IFS=.-:
00:08:36.275    05:53:57 env -- scripts/common.sh@337 -- # read -ra ver2
00:08:36.275    05:53:57 env -- scripts/common.sh@338 -- # local 'op=<'
00:08:36.275    05:53:57 env -- scripts/common.sh@340 -- # ver1_l=2
00:08:36.275    05:53:57 env -- scripts/common.sh@341 -- # ver2_l=1
00:08:36.275    05:53:57 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:36.275    05:53:57 env -- scripts/common.sh@344 -- # case "$op" in
00:08:36.275    05:53:57 env -- scripts/common.sh@345 -- # : 1
00:08:36.275    05:53:57 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:36.275    05:53:57 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:36.275     05:53:57 env -- scripts/common.sh@365 -- # decimal 1
00:08:36.275     05:53:57 env -- scripts/common.sh@353 -- # local d=1
00:08:36.275     05:53:57 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:36.275     05:53:57 env -- scripts/common.sh@355 -- # echo 1
00:08:36.275    05:53:57 env -- scripts/common.sh@365 -- # ver1[v]=1
00:08:36.275     05:53:57 env -- scripts/common.sh@366 -- # decimal 2
00:08:36.275     05:53:57 env -- scripts/common.sh@353 -- # local d=2
00:08:36.275     05:53:57 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:36.275     05:53:57 env -- scripts/common.sh@355 -- # echo 2
00:08:36.275    05:53:57 env -- scripts/common.sh@366 -- # ver2[v]=2
00:08:36.275    05:53:57 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:36.275    05:53:57 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:36.275    05:53:57 env -- scripts/common.sh@368 -- # return 0
00:08:36.275    05:53:57 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:36.275    05:53:57 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:36.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:36.275  		--rc genhtml_branch_coverage=1
00:08:36.275  		--rc genhtml_function_coverage=1
00:08:36.275  		--rc genhtml_legend=1
00:08:36.275  		--rc geninfo_all_blocks=1
00:08:36.275  		--rc geninfo_unexecuted_blocks=1
00:08:36.275  		
00:08:36.275  		'
00:08:36.275    05:53:57 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:36.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:36.275  		--rc genhtml_branch_coverage=1
00:08:36.275  		--rc genhtml_function_coverage=1
00:08:36.275  		--rc genhtml_legend=1
00:08:36.275  		--rc geninfo_all_blocks=1
00:08:36.275  		--rc geninfo_unexecuted_blocks=1
00:08:36.275  		
00:08:36.275  		'
00:08:36.275    05:53:57 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:36.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:36.275  		--rc genhtml_branch_coverage=1
00:08:36.275  		--rc genhtml_function_coverage=1
00:08:36.275  		--rc genhtml_legend=1
00:08:36.275  		--rc geninfo_all_blocks=1
00:08:36.275  		--rc geninfo_unexecuted_blocks=1
00:08:36.275  		
00:08:36.275  		'
00:08:36.275    05:53:57 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:36.275  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:36.275  		--rc genhtml_branch_coverage=1
00:08:36.275  		--rc genhtml_function_coverage=1
00:08:36.275  		--rc genhtml_legend=1
00:08:36.275  		--rc geninfo_all_blocks=1
00:08:36.275  		--rc geninfo_unexecuted_blocks=1
00:08:36.275  		
00:08:36.275  		'
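Because lcov is older than 2, the script exports LCOV_OPTS and LCOV so later coverage steps enable branch and function coverage. A hedged usage sketch (the exact capture command is an assumption, not copied from autotest_common.sh):

    # $LCOV expands to "lcov" plus the --rc flags exported above.
    $LCOV --capture --directory /home/vagrant/spdk_repo/spdk --output-file coverage.info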
00:08:36.275   05:53:57 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:36.275   05:53:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:36.275   05:53:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:36.275   05:53:57 env -- common/autotest_common.sh@10 -- # set +x
00:08:36.534  ************************************
00:08:36.534  START TEST env_memory
00:08:36.534  ************************************
00:08:36.534   05:53:57 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:36.534  
00:08:36.534  
00:08:36.534       CUnit - A unit testing framework for C - Version 2.1-3
00:08:36.534       http://cunit.sourceforge.net/
00:08:36.534  
00:08:36.534  
00:08:36.534  Suite: memory
00:08:36.534    Test: alloc and free memory map ...[2024-11-18 05:53:57.329581] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:36.534  passed
00:08:36.534    Test: mem map translation ...[2024-11-18 05:53:57.392771] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:36.534  [2024-11-18 05:53:57.392855] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:36.534  [2024-11-18 05:53:57.392980] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:36.534  [2024-11-18 05:53:57.393018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:36.534  passed
00:08:36.534    Test: mem map registration ...[2024-11-18 05:53:57.493124] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:08:36.534  [2024-11-18 05:53:57.493236] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:08:36.792  passed
00:08:36.792    Test: mem map adjacent registrations ...passed
00:08:36.792  
00:08:36.792  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:36.792                suites      1      1    n/a      0        0
00:08:36.792                 tests      4      4      4      0        0
00:08:36.792               asserts    152    152    152      0      n/a
00:08:36.792  
00:08:36.792  Elapsed time =    0.353 seconds
00:08:36.792  
00:08:36.792  real	0m0.382s
00:08:36.792  user	0m0.363s
00:08:36.792  sys	0m0.020s
00:08:36.792   05:53:57 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:36.792   05:53:57 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:08:36.792  ************************************
00:08:36.792  END TEST env_memory
00:08:36.792  ************************************
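Each test in this log follows the same run_test shape: a START banner, the command run under time (producing the real/user/sys lines), then an END banner. A simplified sketch of that wrapper, assuming this structure (the real helper in autotest_common.sh also manages xtrace and the argument checks traced above):

    # Hedged sketch of the run_test pattern visible throughout this log.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # emits the real/user/sys timing lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }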
00:08:36.792   05:53:57 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:36.792   05:53:57 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:36.792   05:53:57 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:36.792   05:53:57 env -- common/autotest_common.sh@10 -- # set +x
00:08:36.792  ************************************
00:08:36.792  START TEST env_vtophys
00:08:36.792  ************************************
00:08:36.792   05:53:57 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:36.792  EAL: lib.eal log level changed from notice to debug
00:08:36.792  EAL: Detected lcore 0 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 1 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 2 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 3 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 4 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 5 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 6 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 7 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 8 as core 0 on socket 0
00:08:36.792  EAL: Detected lcore 9 as core 0 on socket 0
00:08:36.792  EAL: Maximum logical cores by configuration: 128
00:08:36.792  EAL: Detected CPU lcores: 10
00:08:36.792  EAL: Detected NUMA nodes: 1
00:08:36.792  EAL: Checking presence of .so 'librte_eal.so.23.0'
00:08:36.792  EAL: Checking presence of .so 'librte_eal.so.23'
00:08:36.792  EAL: Checking presence of .so 'librte_eal.so'
00:08:36.792  EAL: Detected static linkage of DPDK
00:08:36.792  EAL: No shared files mode enabled, IPC will be disabled
00:08:36.792  EAL: Selected IOVA mode 'PA'
00:08:36.792  EAL: Probing VFIO support...
00:08:36.792  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:08:36.792  EAL: VFIO modules not loaded, skipping VFIO support...
00:08:36.793  EAL: Ask a virtual area of 0x2e000 bytes
00:08:36.793  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:08:36.793  EAL: Setting up physically contiguous memory...
00:08:36.793  EAL: Setting maximum number of open files to 1048576
00:08:36.793  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:08:36.793  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:08:36.793  EAL: Ask a virtual area of 0x61000 bytes
00:08:36.793  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:08:36.793  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:36.793  EAL: Ask a virtual area of 0x400000000 bytes
00:08:36.793  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:08:36.793  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:08:36.793  EAL: Ask a virtual area of 0x61000 bytes
00:08:36.793  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:08:36.793  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:36.793  EAL: Ask a virtual area of 0x400000000 bytes
00:08:36.793  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:08:36.793  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:08:36.793  EAL: Ask a virtual area of 0x61000 bytes
00:08:36.793  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:08:36.793  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:36.793  EAL: Ask a virtual area of 0x400000000 bytes
00:08:36.793  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:08:36.793  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:08:36.793  EAL: Ask a virtual area of 0x61000 bytes
00:08:36.793  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:08:36.793  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:36.793  EAL: Ask a virtual area of 0x400000000 bytes
00:08:36.793  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:08:36.793  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:08:36.793  EAL: Hugepages will be freed exactly as allocated.
00:08:36.793  EAL: No shared files mode enabled, IPC is disabled
00:08:36.793  EAL: No shared files mode enabled, IPC is disabled
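The reservations above are self-consistent: starting at --base-virtaddr 0x200000000000, EAL lays out four memseg lists, each a 0x61000-byte header followed by 0x400000000 bytes of VA, i.e. n_segs 8192 x 2 MiB hugepages per list:

    # Arithmetic check of the 16 GiB-per-list figure in the log above.
    printf '0x%x bytes per memseg list\n' $(( 8192 * 2 * 1024 * 1024 ))   # -> 0x400000000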
00:08:37.052  EAL: TSC frequency is ~2200000 KHz
00:08:37.052  EAL: Main lcore 0 is ready (tid=7ed0176dca80;cpuset=[0])
00:08:37.052  EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 0
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 2MB
00:08:37.052  EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:08:37.052  EAL: Mem event callback 'spdk:(nil)' registered
00:08:37.052  EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory)
00:08:37.052  
00:08:37.052  
00:08:37.052       CUnit - A unit testing framework for C - Version 2.1-3
00:08:37.052       http://cunit.sourceforge.net/
00:08:37.052  
00:08:37.052  
00:08:37.052  Suite: components_suite
00:08:37.052    Test: vtophys_malloc_test ...passed
00:08:37.052    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 4
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 4MB
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was shrunk by 4MB
00:08:37.052  EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 4
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 6MB
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was shrunk by 6MB
00:08:37.052  EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 4
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 10MB
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was shrunk by 10MB
00:08:37.052  EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 4
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 18MB
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was shrunk by 18MB
00:08:37.052  EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 4
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 34MB
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was shrunk by 34MB
00:08:37.052  EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 4
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 66MB
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was shrunk by 66MB
00:08:37.052  EAL: Trying to obtain current memory policy.
00:08:37.052  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.052  EAL: Restoring previous memory policy: 4
00:08:37.052  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.052  EAL: request: mp_malloc_sync
00:08:37.052  EAL: No shared files mode enabled, IPC is disabled
00:08:37.052  EAL: Heap on socket 0 was expanded by 130MB
00:08:37.312  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.312  EAL: request: mp_malloc_sync
00:08:37.312  EAL: No shared files mode enabled, IPC is disabled
00:08:37.312  EAL: Heap on socket 0 was shrunk by 130MB
00:08:37.312  EAL: Trying to obtain current memory policy.
00:08:37.312  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.312  EAL: Restoring previous memory policy: 4
00:08:37.312  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.312  EAL: request: mp_malloc_sync
00:08:37.312  EAL: No shared files mode enabled, IPC is disabled
00:08:37.312  EAL: Heap on socket 0 was expanded by 258MB
00:08:37.312  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.312  EAL: request: mp_malloc_sync
00:08:37.312  EAL: No shared files mode enabled, IPC is disabled
00:08:37.312  EAL: Heap on socket 0 was shrunk by 258MB
00:08:37.312  EAL: Trying to obtain current memory policy.
00:08:37.312  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.312  EAL: Restoring previous memory policy: 4
00:08:37.312  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.312  EAL: request: mp_malloc_sync
00:08:37.312  EAL: No shared files mode enabled, IPC is disabled
00:08:37.312  EAL: Heap on socket 0 was expanded by 514MB
00:08:37.571  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.571  EAL: request: mp_malloc_sync
00:08:37.571  EAL: No shared files mode enabled, IPC is disabled
00:08:37.571  EAL: Heap on socket 0 was shrunk by 514MB
00:08:37.571  EAL: Trying to obtain current memory policy.
00:08:37.571  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:37.571  EAL: Restoring previous memory policy: 4
00:08:37.571  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.571  EAL: request: mp_malloc_sync
00:08:37.571  EAL: No shared files mode enabled, IPC is disabled
00:08:37.571  EAL: Heap on socket 0 was expanded by 1026MB
00:08:37.876  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.876  passed
00:08:37.876  
00:08:37.876  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:37.876                suites      1      1    n/a      0        0
00:08:37.876                 tests      2      2      2      0        0
00:08:37.876               asserts   5407   5407   5407      0      n/a
00:08:37.876  
00:08:37.876  Elapsed time =    0.830 seconds
00:08:37.876  EAL: request: mp_malloc_sync
00:08:37.876  EAL: No shared files mode enabled, IPC is disabled
00:08:37.876  EAL: Heap on socket 0 was shrunk by 1026MB
00:08:37.876  
00:08:37.876  EAL: Calling mem event callback 'spdk:(nil)'
00:08:37.876  EAL: request: mp_malloc_sync
00:08:37.876  EAL: No shared files mode enabled, IPC is disabled
00:08:37.876  EAL: Heap on socket 0 was shrunk by 2MB
00:08:37.876  EAL: No shared files mode enabled, IPC is disabled
00:08:37.876  EAL: No shared files mode enabled, IPC is disabled
00:08:37.876  EAL: No shared files mode enabled, IPC is disabled
00:08:37.876  
00:08:37.876  real	0m1.081s
00:08:37.876  user	0m0.550s
00:08:37.876  sys	0m0.405s
00:08:37.876   05:53:58 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:37.876   05:53:58 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:37.876  ************************************
00:08:37.876  END TEST env_vtophys
00:08:37.876  ************************************
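One reading of the vtophys heap sizes above (4, 6, 10, ..., 1026 MB): each expansion is a power-of-two allocation plus one 2 MB hugepage of overhead, i.e. 2^k + 2 MB. This is inferred from the numbers, not from the test source:

    for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB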
00:08:38.163   05:53:58 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:38.163   05:53:58 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:38.163   05:53:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:38.163   05:53:58 env -- common/autotest_common.sh@10 -- # set +x
00:08:38.163  ************************************
00:08:38.163  START TEST env_pci
00:08:38.163  ************************************
00:08:38.163   05:53:58 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:38.163  
00:08:38.163  
00:08:38.163       CUnit - A unit testing framework for C - Version 2.1-3
00:08:38.163       http://cunit.sourceforge.net/
00:08:38.163  
00:08:38.163  
00:08:38.163  Suite: pci
00:08:38.163    Test: pci_hook ...[2024-11-18 05:53:58.856661] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 77653 has claimed it
00:08:38.163  EAL: Cannot find device (10000:00:01.0)
00:08:38.163  EAL: Failed to attach device on primary process
00:08:38.163  passed
00:08:38.163  
00:08:38.163  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:38.163                suites      1      1    n/a      0        0
00:08:38.163                 tests      1      1      1      0        0
00:08:38.163               asserts     25     25     25      0      n/a
00:08:38.163  
00:08:38.163  Elapsed time =    0.006 seconds
00:08:38.163  
00:08:38.163  real	0m0.065s
00:08:38.163  user	0m0.036s
00:08:38.163  sys	0m0.029s
00:08:38.163   05:53:58 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:38.163   05:53:58 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:38.163  ************************************
00:08:38.163  END TEST env_pci
00:08:38.163  ************************************
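The pci_hook failure above is the expected path: spdk_pci_device_claim serializes claims through a per-BDF lock file, and the fake device cannot actually be attached. Inspecting such a lock by hand might look like this (path copied from the error message; the check itself is illustrative):

    ls -l /var/tmp/spdk_pci_lock_10000:00:01.0 2>/dev/null || echo "no claim lock present"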
00:08:38.163   05:53:58 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:38.163    05:53:58 env -- env/env.sh@15 -- # uname
00:08:38.163   05:53:58 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:38.163   05:53:58 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:38.163   05:53:58 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:38.163   05:53:58 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:38.163   05:53:58 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:38.163   05:53:58 env -- common/autotest_common.sh@10 -- # set +x
00:08:38.163  ************************************
00:08:38.163  START TEST env_dpdk_post_init
00:08:38.163  ************************************
00:08:38.163   05:53:58 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:38.163  EAL: Detected CPU lcores: 10
00:08:38.163  EAL: Detected NUMA nodes: 1
00:08:38.163  EAL: Detected static linkage of DPDK
00:08:38.163  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:38.163  EAL: Selected IOVA mode 'PA'
00:08:38.163  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:38.422  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:08:38.422  Starting DPDK initialization...
00:08:38.422  Starting SPDK post initialization...
00:08:38.422  SPDK NVMe probe
00:08:38.422  Attaching to 0000:00:10.0
00:08:38.422  Attached to 0000:00:10.0
00:08:38.422  Cleaning up...
00:08:38.422  
00:08:38.422  real	0m0.235s
00:08:38.422  user	0m0.073s
00:08:38.422  sys	0m0.064s
00:08:38.422   05:53:59 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:38.422   05:53:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:38.422  ************************************
00:08:38.422  END TEST env_dpdk_post_init
00:08:38.422  ************************************
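The flags assembled at env.sh@14-22 pin the app to lcore 0 and fix the hugepage VA base, matching the 0x200000000000 areas reserved earlier; re-running the same invocation by hand:

    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000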
00:08:38.422    05:53:59 env -- env/env.sh@26 -- # uname
00:08:38.422   05:53:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:38.422   05:53:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:38.422   05:53:59 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:38.422   05:53:59 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:38.422   05:53:59 env -- common/autotest_common.sh@10 -- # set +x
00:08:38.422  ************************************
00:08:38.422  START TEST env_mem_callbacks
00:08:38.422  ************************************
00:08:38.422   05:53:59 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:38.422  EAL: Detected CPU lcores: 10
00:08:38.422  EAL: Detected NUMA nodes: 1
00:08:38.422  EAL: Detected static linkage of DPDK
00:08:38.422  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:38.422  EAL: Selected IOVA mode 'PA'
00:08:38.422  TELEMETRY: No legacy callbacks, legacy socket not created
00:08:38.422  
00:08:38.422  
00:08:38.422       CUnit - A unit testing framework for C - Version 2.1-3
00:08:38.422       http://cunit.sourceforge.net/
00:08:38.422  
00:08:38.422  
00:08:38.422  Suite: memory
00:08:38.422    Test: test ...
00:08:38.422  register 0x200000200000 2097152
00:08:38.422  malloc 3145728
00:08:38.422  register 0x200000400000 4194304
00:08:38.422  buf 0x200000500000 len 3145728 PASSED
00:08:38.422  malloc 64
00:08:38.422  buf 0x2000004fff40 len 64 PASSED
00:08:38.422  malloc 4194304
00:08:38.422  register 0x200000800000 6291456
00:08:38.422  buf 0x200000a00000 len 4194304 PASSED
00:08:38.422  free 0x200000500000 3145728
00:08:38.681  free 0x2000004fff40 64
00:08:38.681  unregister 0x200000400000 4194304 PASSED
00:08:38.681  free 0x200000a00000 4194304
00:08:38.681  unregister 0x200000800000 6291456 PASSED
00:08:38.681  malloc 8388608
00:08:38.681  register 0x200000400000 10485760
00:08:38.681  buf 0x200000600000 len 8388608 PASSED
00:08:38.681  free 0x200000600000 8388608
00:08:38.681  unregister 0x200000400000 10485760 PASSED
00:08:38.681  passed
00:08:38.681  
00:08:38.681  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:38.681                suites      1      1    n/a      0        0
00:08:38.681                 tests      1      1      1      0        0
00:08:38.681               asserts     15     15     15      0      n/a
00:08:38.681  
00:08:38.681  Elapsed time =    0.011 seconds
00:08:38.681  
00:08:38.681  real	0m0.174s
00:08:38.681  user	0m0.029s
00:08:38.681  sys	0m0.042s
00:08:38.681   05:53:59 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:38.681   05:53:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:38.681  ************************************
00:08:38.681  END TEST env_mem_callbacks
00:08:38.681  ************************************
00:08:38.681  ************************************
00:08:38.681  END TEST env
00:08:38.681  ************************************
00:08:38.681  
00:08:38.681  real	0m2.397s
00:08:38.681  user	0m1.261s
00:08:38.681  sys	0m0.805s
00:08:38.681   05:53:59 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:38.681   05:53:59 env -- common/autotest_common.sh@10 -- # set +x
00:08:38.681   05:53:59  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:38.681   05:53:59  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:38.681   05:53:59  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:38.681   05:53:59  -- common/autotest_common.sh@10 -- # set +x
00:08:38.681  ************************************
00:08:38.681  START TEST rpc
00:08:38.681  ************************************
00:08:38.681   05:53:59 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:38.681  * Looking for test storage...
00:08:38.681  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:38.681    05:53:59 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:38.681     05:53:59 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:38.681     05:53:59 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:38.940    05:53:59 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:38.940    05:53:59 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:38.940    05:53:59 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:38.940    05:53:59 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:38.940    05:53:59 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:38.940    05:53:59 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:38.940    05:53:59 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:38.940    05:53:59 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:38.940    05:53:59 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:38.940    05:53:59 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:38.940    05:53:59 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:38.940    05:53:59 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:38.940    05:53:59 rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:38.940    05:53:59 rpc -- scripts/common.sh@345 -- # : 1
00:08:38.940    05:53:59 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:38.940    05:53:59 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:38.940     05:53:59 rpc -- scripts/common.sh@365 -- # decimal 1
00:08:38.940     05:53:59 rpc -- scripts/common.sh@353 -- # local d=1
00:08:38.940     05:53:59 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:38.940     05:53:59 rpc -- scripts/common.sh@355 -- # echo 1
00:08:38.940    05:53:59 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:38.940     05:53:59 rpc -- scripts/common.sh@366 -- # decimal 2
00:08:38.940     05:53:59 rpc -- scripts/common.sh@353 -- # local d=2
00:08:38.940     05:53:59 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:38.940     05:53:59 rpc -- scripts/common.sh@355 -- # echo 2
00:08:38.940    05:53:59 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:38.940    05:53:59 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:38.940    05:53:59 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:38.940    05:53:59 rpc -- scripts/common.sh@368 -- # return 0
00:08:38.940  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:38.940    05:53:59 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:38.940    05:53:59 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:38.940  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.940  		--rc genhtml_branch_coverage=1
00:08:38.940  		--rc genhtml_function_coverage=1
00:08:38.940  		--rc genhtml_legend=1
00:08:38.940  		--rc geninfo_all_blocks=1
00:08:38.940  		--rc geninfo_unexecuted_blocks=1
00:08:38.940  		
00:08:38.940  		'
00:08:38.940    05:53:59 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:38.940  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.940  		--rc genhtml_branch_coverage=1
00:08:38.940  		--rc genhtml_function_coverage=1
00:08:38.940  		--rc genhtml_legend=1
00:08:38.940  		--rc geninfo_all_blocks=1
00:08:38.940  		--rc geninfo_unexecuted_blocks=1
00:08:38.940  		
00:08:38.940  		'
00:08:38.940    05:53:59 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:38.940  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.940  		--rc genhtml_branch_coverage=1
00:08:38.940  		--rc genhtml_function_coverage=1
00:08:38.940  		--rc genhtml_legend=1
00:08:38.941  		--rc geninfo_all_blocks=1
00:08:38.941  		--rc geninfo_unexecuted_blocks=1
00:08:38.941  		
00:08:38.941  		'
00:08:38.941    05:53:59 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:38.941  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:38.941  		--rc genhtml_branch_coverage=1
00:08:38.941  		--rc genhtml_function_coverage=1
00:08:38.941  		--rc genhtml_legend=1
00:08:38.941  		--rc geninfo_all_blocks=1
00:08:38.941  		--rc geninfo_unexecuted_blocks=1
00:08:38.941  		
00:08:38.941  		'
00:08:38.941   05:53:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=77780
00:08:38.941   05:53:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:38.941   05:53:59 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:08:38.941   05:53:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 77780
00:08:38.941   05:53:59 rpc -- common/autotest_common.sh@835 -- # '[' -z 77780 ']'
00:08:38.941   05:53:59 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:38.941   05:53:59 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:38.941   05:53:59 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:38.941   05:53:59 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:38.941   05:53:59 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:38.941  [2024-11-18 05:53:59.816682] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:08:38.941  [2024-11-18 05:53:59.817359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77780 ]
00:08:39.199  [2024-11-18 05:53:59.978407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:39.199  [2024-11-18 05:54:00.005280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:39.199  [2024-11-18 05:54:00.005608] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 77780' to capture a snapshot of events at runtime.
00:08:39.199  [2024-11-18 05:54:00.005907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:39.199  [2024-11-18 05:54:00.006069] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:39.199  [2024-11-18 05:54:00.006190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid77780 for offline analysis/debug.
00:08:39.199  [2024-11-18 05:54:00.006888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
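waitforlisten at rpc.sh@67 blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. Conceptually it reduces to polling the RPC socket, roughly like the hedged loop below (the real helper also watches the pid and a retry budget):

    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done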
00:08:40.135   05:54:00 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:40.135   05:54:00 rpc -- common/autotest_common.sh@868 -- # return 0
00:08:40.135   05:54:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:40.136   05:54:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:40.136   05:54:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:40.136   05:54:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:40.136   05:54:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:40.136   05:54:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:40.136   05:54:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:40.136  ************************************
00:08:40.136  START TEST rpc_integrity
00:08:40.136  ************************************
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:40.136    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:40.136    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:40.136    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:40.136    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:40.136  {
00:08:40.136  "name": "Malloc0",
00:08:40.136  "aliases": [
00:08:40.136  "d204cc07-7adc-46a4-8c99-ba4ea168ecda"
00:08:40.136  ],
00:08:40.136  "product_name": "Malloc disk",
00:08:40.136  "block_size": 512,
00:08:40.136  "num_blocks": 16384,
00:08:40.136  "uuid": "d204cc07-7adc-46a4-8c99-ba4ea168ecda",
00:08:40.136  "assigned_rate_limits": {
00:08:40.136  "rw_ios_per_sec": 0,
00:08:40.136  "rw_mbytes_per_sec": 0,
00:08:40.136  "r_mbytes_per_sec": 0,
00:08:40.136  "w_mbytes_per_sec": 0
00:08:40.136  },
00:08:40.136  "claimed": false,
00:08:40.136  "zoned": false,
00:08:40.136  "supported_io_types": {
00:08:40.136  "read": true,
00:08:40.136  "write": true,
00:08:40.136  "unmap": true,
00:08:40.136  "flush": true,
00:08:40.136  "reset": true,
00:08:40.136  "nvme_admin": false,
00:08:40.136  "nvme_io": false,
00:08:40.136  "nvme_io_md": false,
00:08:40.136  "write_zeroes": true,
00:08:40.136  "zcopy": true,
00:08:40.136  "get_zone_info": false,
00:08:40.136  "zone_management": false,
00:08:40.136  "zone_append": false,
00:08:40.136  "compare": false,
00:08:40.136  "compare_and_write": false,
00:08:40.136  "abort": true,
00:08:40.136  "seek_hole": false,
00:08:40.136  "seek_data": false,
00:08:40.136  "copy": true,
00:08:40.136  "nvme_iov_md": false
00:08:40.136  },
00:08:40.136  "memory_domains": [
00:08:40.136  {
00:08:40.136  "dma_device_id": "system",
00:08:40.136  "dma_device_type": 1
00:08:40.136  },
00:08:40.136  {
00:08:40.136  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.136  "dma_device_type": 2
00:08:40.136  }
00:08:40.136  ],
00:08:40.136  "driver_specific": {}
00:08:40.136  }
00:08:40.136  ]'
00:08:40.136    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
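rpc_integrity's assertions capture bdev_get_bdevs output into $bdevs and check the JSON array length with jq: 0 before bdev_malloc_create, 1 once Malloc0 exists. The equivalent hand-run check:

    echo "$bdevs" | jq length    # -> 1 after bdev_malloc_create 8 512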
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.136  [2024-11-18 05:54:00.856068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:40.136  [2024-11-18 05:54:00.856158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:40.136  [2024-11-18 05:54:00.856225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000006080
00:08:40.136  [2024-11-18 05:54:00.856241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:40.136  [2024-11-18 05:54:00.859382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:40.136  [2024-11-18 05:54:00.859475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:40.136  Passthru0
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.136    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.136    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:40.136  {
00:08:40.136  "name": "Malloc0",
00:08:40.136  "aliases": [
00:08:40.136  "d204cc07-7adc-46a4-8c99-ba4ea168ecda"
00:08:40.136  ],
00:08:40.136  "product_name": "Malloc disk",
00:08:40.136  "block_size": 512,
00:08:40.136  "num_blocks": 16384,
00:08:40.136  "uuid": "d204cc07-7adc-46a4-8c99-ba4ea168ecda",
00:08:40.136  "assigned_rate_limits": {
00:08:40.136  "rw_ios_per_sec": 0,
00:08:40.136  "rw_mbytes_per_sec": 0,
00:08:40.136  "r_mbytes_per_sec": 0,
00:08:40.136  "w_mbytes_per_sec": 0
00:08:40.136  },
00:08:40.136  "claimed": true,
00:08:40.136  "claim_type": "exclusive_write",
00:08:40.136  "zoned": false,
00:08:40.136  "supported_io_types": {
00:08:40.136  "read": true,
00:08:40.136  "write": true,
00:08:40.136  "unmap": true,
00:08:40.136  "flush": true,
00:08:40.136  "reset": true,
00:08:40.136  "nvme_admin": false,
00:08:40.136  "nvme_io": false,
00:08:40.136  "nvme_io_md": false,
00:08:40.136  "write_zeroes": true,
00:08:40.136  "zcopy": true,
00:08:40.136  "get_zone_info": false,
00:08:40.136  "zone_management": false,
00:08:40.136  "zone_append": false,
00:08:40.136  "compare": false,
00:08:40.136  "compare_and_write": false,
00:08:40.136  "abort": true,
00:08:40.136  "seek_hole": false,
00:08:40.136  "seek_data": false,
00:08:40.136  "copy": true,
00:08:40.136  "nvme_iov_md": false
00:08:40.136  },
00:08:40.136  "memory_domains": [
00:08:40.136  {
00:08:40.136  "dma_device_id": "system",
00:08:40.136  "dma_device_type": 1
00:08:40.136  },
00:08:40.136  {
00:08:40.136  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.136  "dma_device_type": 2
00:08:40.136  }
00:08:40.136  ],
00:08:40.136  "driver_specific": {}
00:08:40.136  },
00:08:40.136  {
00:08:40.136  "name": "Passthru0",
00:08:40.136  "aliases": [
00:08:40.136  "a6f2e4c6-0f4f-5dcf-aa0b-da07446ec09c"
00:08:40.136  ],
00:08:40.136  "product_name": "passthru",
00:08:40.136  "block_size": 512,
00:08:40.136  "num_blocks": 16384,
00:08:40.136  "uuid": "a6f2e4c6-0f4f-5dcf-aa0b-da07446ec09c",
00:08:40.136  "assigned_rate_limits": {
00:08:40.136  "rw_ios_per_sec": 0,
00:08:40.136  "rw_mbytes_per_sec": 0,
00:08:40.136  "r_mbytes_per_sec": 0,
00:08:40.136  "w_mbytes_per_sec": 0
00:08:40.136  },
00:08:40.136  "claimed": false,
00:08:40.136  "zoned": false,
00:08:40.136  "supported_io_types": {
00:08:40.136  "read": true,
00:08:40.136  "write": true,
00:08:40.136  "unmap": true,
00:08:40.136  "flush": true,
00:08:40.136  "reset": true,
00:08:40.136  "nvme_admin": false,
00:08:40.136  "nvme_io": false,
00:08:40.136  "nvme_io_md": false,
00:08:40.136  "write_zeroes": true,
00:08:40.136  "zcopy": true,
00:08:40.136  "get_zone_info": false,
00:08:40.136  "zone_management": false,
00:08:40.136  "zone_append": false,
00:08:40.136  "compare": false,
00:08:40.136  "compare_and_write": false,
00:08:40.136  "abort": true,
00:08:40.136  "seek_hole": false,
00:08:40.136  "seek_data": false,
00:08:40.136  "copy": true,
00:08:40.136  "nvme_iov_md": false
00:08:40.136  },
00:08:40.136  "memory_domains": [
00:08:40.136  {
00:08:40.136  "dma_device_id": "system",
00:08:40.136  "dma_device_type": 1
00:08:40.136  },
00:08:40.136  {
00:08:40.136  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.136  "dma_device_type": 2
00:08:40.136  }
00:08:40.136  ],
00:08:40.136  "driver_specific": {
00:08:40.136  "passthru": {
00:08:40.136  "name": "Passthru0",
00:08:40.136  "base_bdev_name": "Malloc0"
00:08:40.136  }
00:08:40.136  }
00:08:40.136  }
00:08:40.136  ]'
00:08:40.136    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.136   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:40.136   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.137   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.137   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.137    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:40.137    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.137    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.137    05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.137   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:40.137    05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:40.137   05:54:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:40.137  
00:08:40.137  real	0m0.156s
00:08:40.137  user	0m0.043s
00:08:40.137  sys	0m0.048s
00:08:40.137   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:40.137   05:54:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.137  ************************************
00:08:40.137  END TEST rpc_integrity
00:08:40.137  ************************************
00:08:40.137   05:54:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:40.137   05:54:00 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:40.137   05:54:00 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:40.137   05:54:00 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:40.137  ************************************
00:08:40.137  START TEST rpc_plugins
00:08:40.137  ************************************
00:08:40.137   05:54:00 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:08:40.137    05:54:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:08:40.137    05:54:00 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.137    05:54:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:40.137    05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.137   05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:08:40.137    05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:08:40.137    05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.137    05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:40.137    05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.137   05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:08:40.137  {
00:08:40.137  "name": "Malloc1",
00:08:40.137  "aliases": [
00:08:40.137  "16239bc0-f9ac-41f7-bed6-00d4c1b22923"
00:08:40.137  ],
00:08:40.137  "product_name": "Malloc disk",
00:08:40.137  "block_size": 4096,
00:08:40.137  "num_blocks": 256,
00:08:40.137  "uuid": "16239bc0-f9ac-41f7-bed6-00d4c1b22923",
00:08:40.137  "assigned_rate_limits": {
00:08:40.137  "rw_ios_per_sec": 0,
00:08:40.137  "rw_mbytes_per_sec": 0,
00:08:40.137  "r_mbytes_per_sec": 0,
00:08:40.137  "w_mbytes_per_sec": 0
00:08:40.137  },
00:08:40.137  "claimed": false,
00:08:40.137  "zoned": false,
00:08:40.137  "supported_io_types": {
00:08:40.137  "read": true,
00:08:40.137  "write": true,
00:08:40.137  "unmap": true,
00:08:40.137  "flush": true,
00:08:40.137  "reset": true,
00:08:40.137  "nvme_admin": false,
00:08:40.137  "nvme_io": false,
00:08:40.137  "nvme_io_md": false,
00:08:40.137  "write_zeroes": true,
00:08:40.137  "zcopy": true,
00:08:40.137  "get_zone_info": false,
00:08:40.137  "zone_management": false,
00:08:40.137  "zone_append": false,
00:08:40.137  "compare": false,
00:08:40.137  "compare_and_write": false,
00:08:40.137  "abort": true,
00:08:40.137  "seek_hole": false,
00:08:40.137  "seek_data": false,
00:08:40.137  "copy": true,
00:08:40.137  "nvme_iov_md": false
00:08:40.137  },
00:08:40.137  "memory_domains": [
00:08:40.137  {
00:08:40.137  "dma_device_id": "system",
00:08:40.137  "dma_device_type": 1
00:08:40.137  },
00:08:40.137  {
00:08:40.137  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.137  "dma_device_type": 2
00:08:40.137  }
00:08:40.137  ],
00:08:40.137  "driver_specific": {}
00:08:40.137  }
00:08:40.137  ]'
00:08:40.137    05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:08:40.137   05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:08:40.137   05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:08:40.137   05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.137   05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:40.137   05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.137    05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:08:40.137    05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.137    05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:40.137    05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.137   05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:08:40.137    05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:08:40.137   05:54:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:08:40.137  
00:08:40.137  real	0m0.072s
00:08:40.137  user	0m0.025s
00:08:40.137  sys	0m0.019s
00:08:40.137   05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:40.137   05:54:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:40.137  ************************************
00:08:40.137  END TEST rpc_plugins
00:08:40.137  ************************************
00:08:40.137   05:54:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:08:40.137   05:54:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:40.137   05:54:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:40.137   05:54:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:40.397  ************************************
00:08:40.397  START TEST rpc_trace_cmd_test
00:08:40.397  ************************************
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:08:40.397  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid77780",
00:08:40.397  "tpoint_group_mask": "0x8",
00:08:40.397  "iscsi_conn": {
00:08:40.397  "mask": "0x2",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "scsi": {
00:08:40.397  "mask": "0x4",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "bdev": {
00:08:40.397  "mask": "0x8",
00:08:40.397  "tpoint_mask": "0xffffffffffffffff"
00:08:40.397  },
00:08:40.397  "nvmf_rdma": {
00:08:40.397  "mask": "0x10",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "nvmf_tcp": {
00:08:40.397  "mask": "0x20",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "ftl": {
00:08:40.397  "mask": "0x40",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "blobfs": {
00:08:40.397  "mask": "0x80",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "dsa": {
00:08:40.397  "mask": "0x200",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "thread": {
00:08:40.397  "mask": "0x400",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "nvme_pcie": {
00:08:40.397  "mask": "0x800",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "iaa": {
00:08:40.397  "mask": "0x1000",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "nvme_tcp": {
00:08:40.397  "mask": "0x2000",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "bdev_nvme": {
00:08:40.397  "mask": "0x4000",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "sock": {
00:08:40.397  "mask": "0x8000",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "blob": {
00:08:40.397  "mask": "0x10000",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "bdev_raid": {
00:08:40.397  "mask": "0x20000",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  },
00:08:40.397  "scheduler": {
00:08:40.397  "mask": "0x40000",
00:08:40.397  "tpoint_mask": "0x0"
00:08:40.397  }
00:08:40.397  }'
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:08:40.397    05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
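The trace_get_info assertions above confirm what starting spdk_tgt with -e bdev set up: group mask 0x8 is bit 3, the bdev trace group, and its per-tracepoint mask is fully enabled. A hedged hand-run equivalent against the captured $info:

    echo "$info" | jq -r '.bdev.tpoint_mask'    # -> 0xffffffffffffffff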
00:08:40.397  
00:08:40.397  real	0m0.064s
00:08:40.397  user	0m0.031s
00:08:40.397  sys	0m0.027s
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:40.397   05:54:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:40.397  ************************************
00:08:40.397  END TEST rpc_trace_cmd_test
00:08:40.397  ************************************
00:08:40.397   05:54:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:08:40.397   05:54:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:08:40.397   05:54:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:08:40.397   05:54:01 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:40.397   05:54:01 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:40.397   05:54:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:40.397  ************************************
00:08:40.397  START TEST rpc_daemon_integrity
00:08:40.397  ************************************
00:08:40.397   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.397   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:40.397   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.397   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.397    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.397   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:40.397  {
00:08:40.397  "name": "Malloc2",
00:08:40.397  "aliases": [
00:08:40.397  "294694ef-c2d3-43cc-8263-512ba31c0fee"
00:08:40.397  ],
00:08:40.397  "product_name": "Malloc disk",
00:08:40.397  "block_size": 512,
00:08:40.397  "num_blocks": 16384,
00:08:40.397  "uuid": "294694ef-c2d3-43cc-8263-512ba31c0fee",
00:08:40.397  "assigned_rate_limits": {
00:08:40.397  "rw_ios_per_sec": 0,
00:08:40.397  "rw_mbytes_per_sec": 0,
00:08:40.397  "r_mbytes_per_sec": 0,
00:08:40.397  "w_mbytes_per_sec": 0
00:08:40.397  },
00:08:40.397  "claimed": false,
00:08:40.397  "zoned": false,
00:08:40.397  "supported_io_types": {
00:08:40.397  "read": true,
00:08:40.397  "write": true,
00:08:40.397  "unmap": true,
00:08:40.397  "flush": true,
00:08:40.398  "reset": true,
00:08:40.398  "nvme_admin": false,
00:08:40.398  "nvme_io": false,
00:08:40.398  "nvme_io_md": false,
00:08:40.398  "write_zeroes": true,
00:08:40.398  "zcopy": true,
00:08:40.398  "get_zone_info": false,
00:08:40.398  "zone_management": false,
00:08:40.398  "zone_append": false,
00:08:40.398  "compare": false,
00:08:40.398  "compare_and_write": false,
00:08:40.398  "abort": true,
00:08:40.398  "seek_hole": false,
00:08:40.398  "seek_data": false,
00:08:40.398  "copy": true,
00:08:40.398  "nvme_iov_md": false
00:08:40.398  },
00:08:40.398  "memory_domains": [
00:08:40.398  {
00:08:40.398  "dma_device_id": "system",
00:08:40.398  "dma_device_type": 1
00:08:40.398  },
00:08:40.398  {
00:08:40.398  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.398  "dma_device_type": 2
00:08:40.398  }
00:08:40.398  ],
00:08:40.398  "driver_specific": {}
00:08:40.398  }
00:08:40.398  ]'
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.398  [2024-11-18 05:54:01.297055] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:08:40.398  [2024-11-18 05:54:01.297134] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:40.398  [2024-11-18 05:54:01.297167] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000007280
00:08:40.398  [2024-11-18 05:54:01.297182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:40.398  [2024-11-18 05:54:01.300001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:40.398  [2024-11-18 05:54:01.300048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:40.398  Passthru0
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:40.398  {
00:08:40.398  "name": "Malloc2",
00:08:40.398  "aliases": [
00:08:40.398  "294694ef-c2d3-43cc-8263-512ba31c0fee"
00:08:40.398  ],
00:08:40.398  "product_name": "Malloc disk",
00:08:40.398  "block_size": 512,
00:08:40.398  "num_blocks": 16384,
00:08:40.398  "uuid": "294694ef-c2d3-43cc-8263-512ba31c0fee",
00:08:40.398  "assigned_rate_limits": {
00:08:40.398  "rw_ios_per_sec": 0,
00:08:40.398  "rw_mbytes_per_sec": 0,
00:08:40.398  "r_mbytes_per_sec": 0,
00:08:40.398  "w_mbytes_per_sec": 0
00:08:40.398  },
00:08:40.398  "claimed": true,
00:08:40.398  "claim_type": "exclusive_write",
00:08:40.398  "zoned": false,
00:08:40.398  "supported_io_types": {
00:08:40.398  "read": true,
00:08:40.398  "write": true,
00:08:40.398  "unmap": true,
00:08:40.398  "flush": true,
00:08:40.398  "reset": true,
00:08:40.398  "nvme_admin": false,
00:08:40.398  "nvme_io": false,
00:08:40.398  "nvme_io_md": false,
00:08:40.398  "write_zeroes": true,
00:08:40.398  "zcopy": true,
00:08:40.398  "get_zone_info": false,
00:08:40.398  "zone_management": false,
00:08:40.398  "zone_append": false,
00:08:40.398  "compare": false,
00:08:40.398  "compare_and_write": false,
00:08:40.398  "abort": true,
00:08:40.398  "seek_hole": false,
00:08:40.398  "seek_data": false,
00:08:40.398  "copy": true,
00:08:40.398  "nvme_iov_md": false
00:08:40.398  },
00:08:40.398  "memory_domains": [
00:08:40.398  {
00:08:40.398  "dma_device_id": "system",
00:08:40.398  "dma_device_type": 1
00:08:40.398  },
00:08:40.398  {
00:08:40.398  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.398  "dma_device_type": 2
00:08:40.398  }
00:08:40.398  ],
00:08:40.398  "driver_specific": {}
00:08:40.398  },
00:08:40.398  {
00:08:40.398  "name": "Passthru0",
00:08:40.398  "aliases": [
00:08:40.398  "1a64b989-9a0b-564d-aa87-4bb891498245"
00:08:40.398  ],
00:08:40.398  "product_name": "passthru",
00:08:40.398  "block_size": 512,
00:08:40.398  "num_blocks": 16384,
00:08:40.398  "uuid": "1a64b989-9a0b-564d-aa87-4bb891498245",
00:08:40.398  "assigned_rate_limits": {
00:08:40.398  "rw_ios_per_sec": 0,
00:08:40.398  "rw_mbytes_per_sec": 0,
00:08:40.398  "r_mbytes_per_sec": 0,
00:08:40.398  "w_mbytes_per_sec": 0
00:08:40.398  },
00:08:40.398  "claimed": false,
00:08:40.398  "zoned": false,
00:08:40.398  "supported_io_types": {
00:08:40.398  "read": true,
00:08:40.398  "write": true,
00:08:40.398  "unmap": true,
00:08:40.398  "flush": true,
00:08:40.398  "reset": true,
00:08:40.398  "nvme_admin": false,
00:08:40.398  "nvme_io": false,
00:08:40.398  "nvme_io_md": false,
00:08:40.398  "write_zeroes": true,
00:08:40.398  "zcopy": true,
00:08:40.398  "get_zone_info": false,
00:08:40.398  "zone_management": false,
00:08:40.398  "zone_append": false,
00:08:40.398  "compare": false,
00:08:40.398  "compare_and_write": false,
00:08:40.398  "abort": true,
00:08:40.398  "seek_hole": false,
00:08:40.398  "seek_data": false,
00:08:40.398  "copy": true,
00:08:40.398  "nvme_iov_md": false
00:08:40.398  },
00:08:40.398  "memory_domains": [
00:08:40.398  {
00:08:40.398  "dma_device_id": "system",
00:08:40.398  "dma_device_type": 1
00:08:40.398  },
00:08:40.398  {
00:08:40.398  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:40.398  "dma_device_type": 2
00:08:40.398  }
00:08:40.398  ],
00:08:40.398  "driver_specific": {
00:08:40.398  "passthru": {
00:08:40.398  "name": "Passthru0",
00:08:40.398  "base_bdev_name": "Malloc2"
00:08:40.398  }
00:08:40.398  }
00:08:40.398  }
00:08:40.398  ]'
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
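Beyond checking the array length, the claim relationship shown in the dump above can be asserted directly with jq — a sketch, not part of rpc.sh itself:

    scripts/rpc.py bdev_get_bdevs | jq -e '
      (map(select(.name == "Malloc2"))[0].claimed == true) and
      (map(select(.name == "Passthru0"))[0]
         .driver_specific.passthru.base_bdev_name == "Malloc2")'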
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:40.398   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:40.398    05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:40.657   05:54:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
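Teardown mirrors creation in reverse: the passthru vbdev is deleted before its base, after which the bdev list is empty. The same sequence by hand, as a sketch:

    scripts/rpc.py bdev_passthru_delete Passthru0   # release the claim first
    scripts/rpc.py bdev_malloc_delete Malloc2       # then drop the base bdev
    [ "$(scripts/rpc.py bdev_get_bdevs | jq length)" -eq 0 ]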
00:08:40.657  
00:08:40.657  real	0m0.152s
00:08:40.657  user	0m0.051s
00:08:40.657  sys	0m0.044s
00:08:40.657   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:40.657   05:54:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:40.657  ************************************
00:08:40.657  END TEST rpc_daemon_integrity
00:08:40.657  ************************************
00:08:40.657   05:54:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:08:40.657   05:54:01 rpc -- rpc/rpc.sh@84 -- # killprocess 77780
00:08:40.657   05:54:01 rpc -- common/autotest_common.sh@954 -- # '[' -z 77780 ']'
00:08:40.658   05:54:01 rpc -- common/autotest_common.sh@958 -- # kill -0 77780
00:08:40.658    05:54:01 rpc -- common/autotest_common.sh@959 -- # uname
00:08:40.658   05:54:01 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:40.658    05:54:01 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77780
00:08:40.658   05:54:01 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:40.658   05:54:01 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:40.658   05:54:01 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77780'
00:08:40.658  killing process with pid 77780
00:08:40.658   05:54:01 rpc -- common/autotest_common.sh@973 -- # kill 77780
00:08:40.658   05:54:01 rpc -- common/autotest_common.sh@978 -- # wait 77780
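The kill/wait pair above comes from the killprocess helper in autotest_common.sh. Ignoring the sudo special-casing it traces through, the helper reduces to roughly this sketch:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1    # is the process still alive?
        kill "$pid"                   # SIGTERM the target
        wait "$pid" || true           # reap it; tolerate the signal exit code
    }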
00:08:40.916  
00:08:40.916  real	0m2.268s
00:08:40.916  user	0m2.535s
00:08:40.916  sys	0m0.720s
00:08:40.916   05:54:01 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:40.916   05:54:01 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:40.916  ************************************
00:08:40.916  END TEST rpc
00:08:40.916  ************************************
00:08:40.916   05:54:01  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:40.916   05:54:01  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:40.916   05:54:01  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:40.916   05:54:01  -- common/autotest_common.sh@10 -- # set +x
00:08:40.916  ************************************
00:08:40.916  START TEST skip_rpc
00:08:40.916  ************************************
00:08:40.916   05:54:01 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:41.175  * Looking for test storage...
00:08:41.175  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:41.175    05:54:01 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:41.175     05:54:01 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:41.175     05:54:01 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:41.175    05:54:02 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@345 -- # : 1
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:41.175     05:54:02 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:41.175    05:54:02 skip_rpc -- scripts/common.sh@368 -- # return 0
00:08:41.175    05:54:02 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:41.175    05:54:02 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:41.175  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.175  		--rc genhtml_branch_coverage=1
00:08:41.175  		--rc genhtml_function_coverage=1
00:08:41.175  		--rc genhtml_legend=1
00:08:41.175  		--rc geninfo_all_blocks=1
00:08:41.175  		--rc geninfo_unexecuted_blocks=1
00:08:41.175  		
00:08:41.175  		'
00:08:41.175    05:54:02 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:41.175  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.175  		--rc genhtml_branch_coverage=1
00:08:41.175  		--rc genhtml_function_coverage=1
00:08:41.175  		--rc genhtml_legend=1
00:08:41.175  		--rc geninfo_all_blocks=1
00:08:41.175  		--rc geninfo_unexecuted_blocks=1
00:08:41.175  		
00:08:41.175  		'
00:08:41.175    05:54:02 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:41.175  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.175  		--rc genhtml_branch_coverage=1
00:08:41.175  		--rc genhtml_function_coverage=1
00:08:41.175  		--rc genhtml_legend=1
00:08:41.175  		--rc geninfo_all_blocks=1
00:08:41.175  		--rc geninfo_unexecuted_blocks=1
00:08:41.175  		
00:08:41.175  		'
00:08:41.175    05:54:02 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:41.175  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:41.175  		--rc genhtml_branch_coverage=1
00:08:41.175  		--rc genhtml_function_coverage=1
00:08:41.175  		--rc genhtml_legend=1
00:08:41.175  		--rc geninfo_all_blocks=1
00:08:41.175  		--rc geninfo_unexecuted_blocks=1
00:08:41.175  		
00:08:41.175  		'
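The long trace above is scripts/common.sh deciding whether the installed lcov predates 2.x, and if so exporting branch/function-coverage options. A condensed sketch of the same decision, using sort -V instead of the script's digit-by-digit loop:

    lt() {  # true when $1 sorts strictly before $2 as a version string
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ] && [ "$1" != "$2" ]
    }
    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi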
00:08:41.175   05:54:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:41.175   05:54:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:41.176   05:54:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:08:41.176   05:54:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:41.176   05:54:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:41.176   05:54:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:41.176  ************************************
00:08:41.176  START TEST skip_rpc
00:08:41.176  ************************************
00:08:41.176   05:54:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:08:41.176   05:54:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=77982
00:08:41.176   05:54:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:41.176   05:54:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:08:41.176   05:54:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:08:41.176  [2024-11-18 05:54:02.099112] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:08:41.176  [2024-11-18 05:54:02.099771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77982 ]
00:08:41.435  [2024-11-18 05:54:02.254170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:41.435  [2024-11-18 05:54:02.279326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
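The scenario exercised from here on: the target was started with --no-rpc-server, so the RPC call below must fail. The whole flow in outline, as a sketch (rpc.py standing in for the rpc_cmd helper):

    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                   # give the app time to come up
    if scripts/rpc.py spdk_get_version; then  # must NOT succeed: no RPC server
        echo "unexpected: RPC answered" >&2; exit 1
    fi
    kill "$spdk_pid"; wait "$spdk_pid"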
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:46.706    05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
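The es juggling above comes from the NOT helper in autotest_common.sh: it runs a command that is expected to fail and converts a non-zero status into test success (the `(( es > 128 ))` check separates out signal deaths, which are handled differently). Stripped of that handling, a minimal sketch:

    NOT() { ! "$@"; }
    NOT scripts/rpc.py spdk_get_version   # passes only because the RPC call fails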
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 77982
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 77982 ']'
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 77982
00:08:46.706    05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:46.706    05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77982
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77982'
00:08:46.706  killing process with pid 77982
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 77982
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 77982
00:08:46.706  
00:08:46.706  real	0m5.371s
00:08:46.706  user	0m5.048s
00:08:46.706  sys	0m0.255s
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:46.706   05:54:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:46.706  ************************************
00:08:46.706  END TEST skip_rpc
00:08:46.706  ************************************
00:08:46.706   05:54:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:08:46.706   05:54:07 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:46.706   05:54:07 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:46.706   05:54:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:46.706  ************************************
00:08:46.706  START TEST skip_rpc_with_json
00:08:46.706  ************************************
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=78064
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 78064
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 78064 ']'
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:46.706  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:46.706   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:46.706  [2024-11-18 05:54:07.529743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:08:46.706  [2024-11-18 05:54:07.529962] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78064 ]
00:08:46.965  [2024-11-18 05:54:07.687749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:46.965  [2024-11-18 05:54:07.716137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:46.965  [2024-11-18 05:54:07.933219] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:08:46.965  request:
00:08:46.965  {
00:08:46.965  "trtype": "tcp",
00:08:46.965  "method": "nvmf_get_transports",
00:08:46.965  "req_id": 1
00:08:46.965  }
00:08:46.965  Got JSON-RPC error response
00:08:46.965  response:
00:08:46.965  {
00:08:46.965  "code": -19,
00:08:46.965  "message": "No such device"
00:08:46.965  }
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
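The -19/"No such device" error above is the expected branch: no TCP transport exists yet, so the next RPC creates one. The equivalent pair of calls against a live target, as a sketch:

    scripts/rpc.py nvmf_get_transports --trtype tcp   # fails until one is created
    scripts/rpc.py nvmf_create_transport -t tcp       # logs "TCP Transport Init"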
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:46.965   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:47.225  [2024-11-18 05:54:07.945401] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:47.225   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:47.225   05:54:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:08:47.225   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:47.225   05:54:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:47.225   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:47.225   05:54:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:47.225  {
00:08:47.225  "subsystems": [
00:08:47.225  {
00:08:47.225  "subsystem": "scheduler",
00:08:47.225  "config": [
00:08:47.225  {
00:08:47.225  "method": "framework_set_scheduler",
00:08:47.225  "params": {
00:08:47.225  "name": "static"
00:08:47.225  }
00:08:47.225  }
00:08:47.225  ]
00:08:47.225  },
00:08:47.225  {
00:08:47.225  "subsystem": "vmd",
00:08:47.225  "config": []
00:08:47.225  },
00:08:47.225  {
00:08:47.225  "subsystem": "sock",
00:08:47.225  "config": [
00:08:47.225  {
00:08:47.225  "method": "sock_set_default_impl",
00:08:47.225  "params": {
00:08:47.225  "impl_name": "posix"
00:08:47.225  }
00:08:47.225  },
00:08:47.225  {
00:08:47.225  "method": "sock_impl_set_options",
00:08:47.225  "params": {
00:08:47.226  "impl_name": "ssl",
00:08:47.226  "recv_buf_size": 4096,
00:08:47.226  "send_buf_size": 4096,
00:08:47.226  "enable_recv_pipe": true,
00:08:47.226  "enable_quickack": false,
00:08:47.226  "enable_placement_id": 0,
00:08:47.226  "enable_zerocopy_send_server": true,
00:08:47.226  "enable_zerocopy_send_client": false,
00:08:47.226  "zerocopy_threshold": 0,
00:08:47.226  "tls_version": 0,
00:08:47.226  "enable_ktls": false
00:08:47.226  }
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "method": "sock_impl_set_options",
00:08:47.226  "params": {
00:08:47.226  "impl_name": "posix",
00:08:47.226  "recv_buf_size": 2097152,
00:08:47.226  "send_buf_size": 2097152,
00:08:47.226  "enable_recv_pipe": true,
00:08:47.226  "enable_quickack": false,
00:08:47.226  "enable_placement_id": 0,
00:08:47.226  "enable_zerocopy_send_server": true,
00:08:47.226  "enable_zerocopy_send_client": false,
00:08:47.226  "zerocopy_threshold": 0,
00:08:47.226  "tls_version": 0,
00:08:47.226  "enable_ktls": false
00:08:47.226  }
00:08:47.226  }
00:08:47.226  ]
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "subsystem": "iobuf",
00:08:47.226  "config": [
00:08:47.226  {
00:08:47.226  "method": "iobuf_set_options",
00:08:47.226  "params": {
00:08:47.226  "small_pool_count": 8192,
00:08:47.226  "large_pool_count": 1024,
00:08:47.226  "small_bufsize": 8192,
00:08:47.226  "large_bufsize": 135168,
00:08:47.226  "enable_numa": false
00:08:47.226  }
00:08:47.226  }
00:08:47.226  ]
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "subsystem": "keyring",
00:08:47.226  "config": []
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "subsystem": "fsdev",
00:08:47.226  "config": [
00:08:47.226  {
00:08:47.226  "method": "fsdev_set_opts",
00:08:47.226  "params": {
00:08:47.226  "fsdev_io_pool_size": 65535,
00:08:47.226  "fsdev_io_cache_size": 256
00:08:47.226  }
00:08:47.226  }
00:08:47.226  ]
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "subsystem": "accel",
00:08:47.226  "config": [
00:08:47.226  {
00:08:47.226  "method": "accel_set_options",
00:08:47.226  "params": {
00:08:47.226  "small_cache_size": 128,
00:08:47.226  "large_cache_size": 16,
00:08:47.226  "task_count": 2048,
00:08:47.226  "sequence_count": 2048,
00:08:47.226  "buf_count": 2048
00:08:47.226  }
00:08:47.226  }
00:08:47.226  ]
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "subsystem": "bdev",
00:08:47.226  "config": [
00:08:47.226  {
00:08:47.226  "method": "bdev_set_options",
00:08:47.226  "params": {
00:08:47.226  "bdev_io_pool_size": 65535,
00:08:47.226  "bdev_io_cache_size": 256,
00:08:47.226  "bdev_auto_examine": true,
00:08:47.226  "iobuf_small_cache_size": 128,
00:08:47.226  "iobuf_large_cache_size": 16
00:08:47.226  }
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "method": "bdev_raid_set_options",
00:08:47.226  "params": {
00:08:47.226  "process_window_size_kb": 1024,
00:08:47.226  "process_max_bandwidth_mb_sec": 0
00:08:47.226  }
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "method": "bdev_nvme_set_options",
00:08:47.226  "params": {
00:08:47.226  "action_on_timeout": "none",
00:08:47.226  "timeout_us": 0,
00:08:47.226  "timeout_admin_us": 0,
00:08:47.226  "keep_alive_timeout_ms": 10000,
00:08:47.226  "arbitration_burst": 0,
00:08:47.226  "low_priority_weight": 0,
00:08:47.226  "medium_priority_weight": 0,
00:08:47.226  "high_priority_weight": 0,
00:08:47.226  "nvme_adminq_poll_period_us": 10000,
00:08:47.226  "nvme_ioq_poll_period_us": 0,
00:08:47.226  "io_queue_requests": 0,
00:08:47.226  "delay_cmd_submit": true,
00:08:47.226  "transport_retry_count": 4,
00:08:47.226  "bdev_retry_count": 3,
00:08:47.226  "transport_ack_timeout": 0,
00:08:47.226  "ctrlr_loss_timeout_sec": 0,
00:08:47.226  "reconnect_delay_sec": 0,
00:08:47.226  "fast_io_fail_timeout_sec": 0,
00:08:47.226  "disable_auto_failback": false,
00:08:47.226  "generate_uuids": false,
00:08:47.226  "transport_tos": 0,
00:08:47.226  "nvme_error_stat": false,
00:08:47.226  "rdma_srq_size": 0,
00:08:47.226  "io_path_stat": false,
00:08:47.226  "allow_accel_sequence": false,
00:08:47.226  "rdma_max_cq_size": 0,
00:08:47.226  "rdma_cm_event_timeout_ms": 0,
00:08:47.226  "dhchap_digests": [
00:08:47.226  "sha256",
00:08:47.226  "sha384",
00:08:47.226  "sha512"
00:08:47.226  ],
00:08:47.226  "dhchap_dhgroups": [
00:08:47.226  "null",
00:08:47.226  "ffdhe2048",
00:08:47.226  "ffdhe3072",
00:08:47.226  "ffdhe4096",
00:08:47.226  "ffdhe6144",
00:08:47.226  "ffdhe8192"
00:08:47.226  ]
00:08:47.226  }
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "method": "bdev_nvme_set_hotplug",
00:08:47.226  "params": {
00:08:47.226  "period_us": 100000,
00:08:47.226  "enable": false
00:08:47.226  }
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "method": "bdev_iscsi_set_options",
00:08:47.226  "params": {
00:08:47.226  "timeout_sec": 30
00:08:47.226  }
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "method": "bdev_wait_for_examine"
00:08:47.226  }
00:08:47.226  ]
00:08:47.226  },
00:08:47.226  {
00:08:47.226  "subsystem": "nvmf",
00:08:47.226  "config": [
00:08:47.226  {
00:08:47.226  "method": "nvmf_set_config",
00:08:47.226  "params": {
00:08:47.226  "discovery_filter": "match_any",
00:08:47.226  "admin_cmd_passthru": {
00:08:47.226  "identify_ctrlr": false
00:08:47.226  },
00:08:47.226  "dhchap_digests": [
00:08:47.226  "sha256",
00:08:47.226  "sha384",
00:08:47.226  "sha512"
00:08:47.226  ],
00:08:47.226  "dhchap_dhgroups": [
00:08:47.226  "null",
00:08:47.226  "ffdhe2048",
00:08:47.226  "ffdhe3072",
00:08:47.226  "ffdhe4096",
00:08:47.226  "ffdhe6144",
00:08:47.226  "ffdhe8192"
00:08:47.226  ]
00:08:47.227  }
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "method": "nvmf_set_max_subsystems",
00:08:47.227  "params": {
00:08:47.227  "max_subsystems": 1024
00:08:47.227  }
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "method": "nvmf_set_crdt",
00:08:47.227  "params": {
00:08:47.227  "crdt1": 0,
00:08:47.227  "crdt2": 0,
00:08:47.227  "crdt3": 0
00:08:47.227  }
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "method": "nvmf_create_transport",
00:08:47.227  "params": {
00:08:47.227  "trtype": "TCP",
00:08:47.227  "max_queue_depth": 128,
00:08:47.227  "max_io_qpairs_per_ctrlr": 127,
00:08:47.227  "in_capsule_data_size": 4096,
00:08:47.227  "max_io_size": 131072,
00:08:47.227  "io_unit_size": 131072,
00:08:47.227  "max_aq_depth": 128,
00:08:47.227  "num_shared_buffers": 511,
00:08:47.227  "buf_cache_size": 4294967295,
00:08:47.227  "dif_insert_or_strip": false,
00:08:47.227  "zcopy": false,
00:08:47.227  "c2h_success": true,
00:08:47.227  "sock_priority": 0,
00:08:47.227  "abort_timeout_sec": 1,
00:08:47.227  "ack_timeout": 0,
00:08:47.227  "data_wr_pool_size": 0
00:08:47.227  }
00:08:47.227  }
00:08:47.227  ]
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "subsystem": "nbd",
00:08:47.227  "config": []
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "subsystem": "ublk",
00:08:47.227  "config": []
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "subsystem": "vhost_blk",
00:08:47.227  "config": []
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "subsystem": "scsi",
00:08:47.227  "config": null
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "subsystem": "iscsi",
00:08:47.227  "config": [
00:08:47.227  {
00:08:47.227  "method": "iscsi_set_options",
00:08:47.227  "params": {
00:08:47.227  "node_base": "iqn.2016-06.io.spdk",
00:08:47.227  "max_sessions": 128,
00:08:47.227  "max_connections_per_session": 2,
00:08:47.227  "max_queue_depth": 64,
00:08:47.227  "default_time2wait": 2,
00:08:47.227  "default_time2retain": 20,
00:08:47.227  "first_burst_length": 8192,
00:08:47.227  "immediate_data": true,
00:08:47.227  "allow_duplicated_isid": false,
00:08:47.227  "error_recovery_level": 0,
00:08:47.227  "nop_timeout": 60,
00:08:47.227  "nop_in_interval": 30,
00:08:47.227  "disable_chap": false,
00:08:47.227  "require_chap": false,
00:08:47.227  "mutual_chap": false,
00:08:47.227  "chap_group": 0,
00:08:47.227  "max_large_datain_per_connection": 64,
00:08:47.227  "max_r2t_per_connection": 4,
00:08:47.227  "pdu_pool_size": 36864,
00:08:47.227  "immediate_data_pool_size": 16384,
00:08:47.227  "data_out_pool_size": 2048
00:08:47.227  }
00:08:47.227  }
00:08:47.227  ]
00:08:47.227  },
00:08:47.227  {
00:08:47.227  "subsystem": "vhost_scsi",
00:08:47.227  "config": []
00:08:47.227  }
00:08:47.227  ]
00:08:47.227  }
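The JSON above is what save_config wrote to test/rpc/config.json. The second half of the test replays it at startup, roughly:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
    # success is later verified by grepping the log for "TCP Transport Init",
    # proving the nvmf transport from the config was recreated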
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 78064
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 78064 ']'
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 78064
00:08:47.227    05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:47.227    05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78064
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78064'
00:08:47.227  killing process with pid 78064
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 78064
00:08:47.227   05:54:08 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 78064
00:08:47.795   05:54:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=78085
00:08:47.795   05:54:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:47.795   05:54:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 78085
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 78085 ']'
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 78085
00:08:53.074    05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:53.074    05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78085
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78085'
00:08:53.074  killing process with pid 78085
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 78085
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 78085
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:53.074  
00:08:53.074  real	0m6.365s
00:08:53.074  user	0m5.947s
00:08:53.074  sys	0m0.600s
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:08:53.074  ************************************
00:08:53.074  END TEST skip_rpc_with_json
00:08:53.074  ************************************
00:08:53.074   05:54:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:08:53.074   05:54:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.074   05:54:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.074   05:54:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:53.074  ************************************
00:08:53.074  START TEST skip_rpc_with_delay
00:08:53.074  ************************************
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:53.074    05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:53.074    05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:08:53.074   05:54:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:08:53.074  [2024-11-18 05:54:13.951721] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
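The error above is the point of this test: --wait-for-rpc tells the app to pause startup until an RPC arrives, which is contradictory when --no-rpc-server disables the RPC server entirely, so the app must refuse to start. A one-line reproduction sketch:

    build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc   # exits non-zero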
00:08:53.074   05:54:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:08:53.074   05:54:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:53.074   05:54:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:08:53.074   05:54:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:53.074  
00:08:53.074  real	0m0.137s
00:08:53.074  user	0m0.071s
00:08:53.074  sys	0m0.067s
00:08:53.074   05:54:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:53.074   05:54:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:08:53.074  ************************************
00:08:53.074  END TEST skip_rpc_with_delay
00:08:53.074  ************************************
00:08:53.406    05:54:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:08:53.406   05:54:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:08:53.406   05:54:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:08:53.406   05:54:14 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.406   05:54:14 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.406   05:54:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:53.406  ************************************
00:08:53.406  START TEST exit_on_failed_rpc_init
00:08:53.406  ************************************
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=78191
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 78191
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 78191 ']'
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:53.406  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:53.406   05:54:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:53.406  [2024-11-18 05:54:14.142210] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:08:53.407  [2024-11-18 05:54:14.142406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78191 ]
00:08:53.407  [2024-11-18 05:54:14.297865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:53.407  [2024-11-18 05:54:14.320647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:54.346    05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:54.346    05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:08:54.346   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:08:54.346  [2024-11-18 05:54:15.127735] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:08:54.346  [2024-11-18 05:54:15.127926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78209 ]
00:08:54.346  [2024-11-18 05:54:15.278733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:54.346  [2024-11-18 05:54:15.305609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:54.346  [2024-11-18 05:54:15.305788] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:08:54.346  [2024-11-18 05:54:15.305830] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:08:54.346  [2024-11-18 05:54:15.305847] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
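The failure sequence above is what exit_on_failed_rpc_init checks: a second target on the same default socket cannot bind its RPC listener and must shut itself down with a non-zero status. In outline, as a sketch:

    build/bin/spdk_tgt -m 0x1 &    # first instance owns /var/tmp/spdk.sock
    build/bin/spdk_tgt -m 0x2      # second instance: RPC listen fails,
                                   # spdk_app_stop exits non-zero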
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 78191
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 78191 ']'
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 78191
00:08:54.606    05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:54.606    05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78191
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78191'
00:08:54.606  killing process with pid 78191
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 78191
00:08:54.606   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 78191
00:08:54.865  
00:08:54.865  real	0m1.626s
00:08:54.865  user	0m1.858s
00:08:54.865  sys	0m0.403s
00:08:54.865   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:54.865   05:54:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:08:54.865  ************************************
00:08:54.865  END TEST exit_on_failed_rpc_init
00:08:54.865  ************************************
00:08:54.865   05:54:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:54.865  
00:08:54.865  real	0m13.916s
00:08:54.865  user	0m13.106s
00:08:54.865  sys	0m1.562s
00:08:54.865   05:54:15 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:54.865   05:54:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:54.865  ************************************
00:08:54.865  END TEST skip_rpc
00:08:54.865  ************************************
00:08:54.865   05:54:15  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:08:54.865   05:54:15  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:54.865   05:54:15  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:54.865   05:54:15  -- common/autotest_common.sh@10 -- # set +x
00:08:54.865  ************************************
00:08:54.865  START TEST rpc_client
00:08:54.865  ************************************
00:08:54.865   05:54:15 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:08:55.125  * Looking for test storage...
00:08:55.125  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:08:55.125    05:54:15 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:55.125     05:54:15 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:08:55.125     05:54:15 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:55.125    05:54:15 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@345 -- # : 1
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@353 -- # local d=1
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@355 -- # echo 1
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@353 -- # local d=2
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:55.125     05:54:15 rpc_client -- scripts/common.sh@355 -- # echo 2
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:55.125    05:54:15 rpc_client -- scripts/common.sh@368 -- # return 0
00:08:55.125    05:54:15 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:55.125    05:54:15 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:55.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.125  		--rc genhtml_branch_coverage=1
00:08:55.125  		--rc genhtml_function_coverage=1
00:08:55.125  		--rc genhtml_legend=1
00:08:55.125  		--rc geninfo_all_blocks=1
00:08:55.125  		--rc geninfo_unexecuted_blocks=1
00:08:55.125  		
00:08:55.125  		'
00:08:55.125    05:54:15 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:55.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.125  		--rc genhtml_branch_coverage=1
00:08:55.125  		--rc genhtml_function_coverage=1
00:08:55.125  		--rc genhtml_legend=1
00:08:55.125  		--rc geninfo_all_blocks=1
00:08:55.125  		--rc geninfo_unexecuted_blocks=1
00:08:55.125  		
00:08:55.125  		'
00:08:55.125    05:54:15 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:55.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.125  		--rc genhtml_branch_coverage=1
00:08:55.125  		--rc genhtml_function_coverage=1
00:08:55.125  		--rc genhtml_legend=1
00:08:55.125  		--rc geninfo_all_blocks=1
00:08:55.125  		--rc geninfo_unexecuted_blocks=1
00:08:55.125  		
00:08:55.125  		'
00:08:55.125    05:54:15 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:55.125  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.125  		--rc genhtml_branch_coverage=1
00:08:55.125  		--rc genhtml_function_coverage=1
00:08:55.125  		--rc genhtml_legend=1
00:08:55.125  		--rc geninfo_all_blocks=1
00:08:55.125  		--rc geninfo_unexecuted_blocks=1
00:08:55.125  		
00:08:55.125  		'
00:08:55.125   05:54:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:08:55.125  OK
00:08:55.125   05:54:16 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:08:55.125  
00:08:55.125  real	0m0.229s
00:08:55.125  user	0m0.127s
00:08:55.125  sys	0m0.120s
00:08:55.125   05:54:16 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.125   05:54:16 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:08:55.125  ************************************
00:08:55.125  END TEST rpc_client
00:08:55.125  ************************************
00:08:55.125   05:54:16  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:08:55.125   05:54:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.125   05:54:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.125   05:54:16  -- common/autotest_common.sh@10 -- # set +x
00:08:55.125  ************************************
00:08:55.125  START TEST json_config
00:08:55.125  ************************************
00:08:55.125   05:54:16 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:08:55.386    05:54:16 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:55.386     05:54:16 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:55.386     05:54:16 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:08:55.386    05:54:16 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:55.386    05:54:16 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:55.386    05:54:16 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:55.386    05:54:16 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:55.386    05:54:16 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:08:55.386    05:54:16 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:08:55.386    05:54:16 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:08:55.386    05:54:16 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:08:55.386    05:54:16 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:08:55.386    05:54:16 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:08:55.386    05:54:16 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:08:55.386    05:54:16 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:55.386    05:54:16 json_config -- scripts/common.sh@344 -- # case "$op" in
00:08:55.386    05:54:16 json_config -- scripts/common.sh@345 -- # : 1
00:08:55.386    05:54:16 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:55.386    05:54:16 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:55.386     05:54:16 json_config -- scripts/common.sh@365 -- # decimal 1
00:08:55.386     05:54:16 json_config -- scripts/common.sh@353 -- # local d=1
00:08:55.386     05:54:16 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:55.386     05:54:16 json_config -- scripts/common.sh@355 -- # echo 1
00:08:55.386    05:54:16 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:08:55.386     05:54:16 json_config -- scripts/common.sh@366 -- # decimal 2
00:08:55.386     05:54:16 json_config -- scripts/common.sh@353 -- # local d=2
00:08:55.386     05:54:16 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:55.386     05:54:16 json_config -- scripts/common.sh@355 -- # echo 2
00:08:55.386    05:54:16 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:08:55.386    05:54:16 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:55.386    05:54:16 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:55.386    05:54:16 json_config -- scripts/common.sh@368 -- # return 0
00:08:55.386    05:54:16 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:55.386    05:54:16 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:55.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.386  		--rc genhtml_branch_coverage=1
00:08:55.386  		--rc genhtml_function_coverage=1
00:08:55.386  		--rc genhtml_legend=1
00:08:55.386  		--rc geninfo_all_blocks=1
00:08:55.386  		--rc geninfo_unexecuted_blocks=1
00:08:55.386  		
00:08:55.386  		'
00:08:55.386    05:54:16 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:55.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.386  		--rc genhtml_branch_coverage=1
00:08:55.386  		--rc genhtml_function_coverage=1
00:08:55.386  		--rc genhtml_legend=1
00:08:55.386  		--rc geninfo_all_blocks=1
00:08:55.386  		--rc geninfo_unexecuted_blocks=1
00:08:55.386  		
00:08:55.386  		'
00:08:55.386    05:54:16 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:55.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.386  		--rc genhtml_branch_coverage=1
00:08:55.386  		--rc genhtml_function_coverage=1
00:08:55.386  		--rc genhtml_legend=1
00:08:55.386  		--rc geninfo_all_blocks=1
00:08:55.386  		--rc geninfo_unexecuted_blocks=1
00:08:55.386  		
00:08:55.386  		'
00:08:55.386    05:54:16 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:55.386  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:55.386  		--rc genhtml_branch_coverage=1
00:08:55.386  		--rc genhtml_function_coverage=1
00:08:55.386  		--rc genhtml_legend=1
00:08:55.386  		--rc geninfo_all_blocks=1
00:08:55.386  		--rc geninfo_unexecuted_blocks=1
00:08:55.386  		
00:08:55.386  		'
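[Editor's note] The four traces above assign LCOV_OPTS and LCOV the same set of `--rc` overrides, enabling branch and function coverage in both lcov capture and genhtml report generation. A hedged sketch of how such options are typically consumed later in a coverage run (the build directory and output paths here are illustrative, not from this log):

    # Illustrative only: capture and render coverage with the options above.
    lcov $LCOV_OPTS --capture --directory build/ --output-file cov.info
    genhtml --branch-coverage cov.info -o cov_html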
00:08:55.386   05:54:16 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:08:55.386     05:54:16 json_config -- nvmf/common.sh@7 -- # uname -s
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:55.386     05:54:16 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bcc7b4b-21e3-45fa-8e12-040cf09ad907
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=7bcc7b4b-21e3-45fa-8e12-040cf09ad907
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:55.386    05:54:16 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:08:55.386     05:54:16 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:08:55.386     05:54:16 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:55.386     05:54:16 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:55.386     05:54:16 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:55.386      05:54:16 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:55.387      05:54:16 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:55.387      05:54:16 json_config -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:55.387      05:54:16 json_config -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:08:55.387      05:54:16 json_config -- paths/export.sh@6 -- # export PATH
00:08:55.387      05:54:16 json_config -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
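[Editor's note] Note how each paths/export.sh trace above prepends the same tool directories again, so by line 7 every entry appears four times. Lookup order is unchanged, but the variable grows each time the script is sourced. A small dedup pass, purely illustrative:

    # Drop duplicate PATH entries while keeping first-seen order.
    dedup_path() {
        local entry out=
        local IFS=:
        for entry in $PATH; do
            case ":$out:" in
                *":$entry:"*) ;;                  # already present, skip
                *) out=${out:+$out:}$entry ;;
            esac
        done
        PATH=$out
    }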
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@51 -- # : 0
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:08:55.387  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
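[Editor's note] The message above is a real shell error captured from nvmf/common.sh line 33: `'[' '' -eq 1 ']'` fails because `-eq` requires integer operands and the tested variable expanded to the empty string. The script tolerates it (the test simply evaluates false and execution continues), but the warning is avoidable; a hedged sketch of the usual guard (the variable name is illustrative):

    # '-eq' on an empty value prints "integer expression expected".
    # Defaulting the expansion keeps the test well-formed:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag set"
    fi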
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:55.387    05:54:16 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
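[Editor's note] json_config.sh keeps per-app state (pid, RPC socket, CLI parameters, config path) in parallel associative arrays keyed by 'target'/'initiator', as declared above. A minimal sketch of the same bookkeeping pattern, using the values from this trace:

    declare -A app_pid=([target]='' [initiator]='')
    declare -A app_socket=([target]=/var/tmp/spdk_tgt.sock
                           [initiator]=/var/tmp/spdk_initiator.sock)
    declare -A app_params=([target]='-m 0x1 -s 1024'
                           [initiator]='-m 0x2 -g -u -s 1024')

    app=target
    # Launch the app with its own params and socket, record its pid.
    build/bin/spdk_tgt ${app_params[$app]} -r "${app_socket[$app]}" --wait-for-rpc &
    app_pid[$app]=$!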
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:08:55.387  INFO: JSON configuration test init
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:55.387   05:54:16 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:08:55.387   05:54:16 json_config -- json_config/common.sh@9 -- # local app=target
00:08:55.387   05:54:16 json_config -- json_config/common.sh@10 -- # shift
00:08:55.387  Waiting for target to run...
00:08:55.387  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:08:55.387   05:54:16 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:08:55.387   05:54:16 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:08:55.387   05:54:16 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:08:55.387   05:54:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:08:55.387   05:54:16 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:08:55.387   05:54:16 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=78352
00:08:55.387   05:54:16 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:08:55.387   05:54:16 json_config -- json_config/common.sh@25 -- # waitforlisten 78352 /var/tmp/spdk_tgt.sock
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@835 -- # '[' -z 78352 ']'
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:08:55.387   05:54:16 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:55.387   05:54:16 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:55.387  [2024-11-18 05:54:16.350556] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:08:55.387  [2024-11-18 05:54:16.350954] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78352 ]
00:08:55.955  [2024-11-18 05:54:16.679910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:55.955  [2024-11-18 05:54:16.693444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:56.524   05:54:17 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:56.524   05:54:17 json_config -- common/autotest_common.sh@868 -- # return 0
00:08:56.524   05:54:17 json_config -- json_config/common.sh@26 -- # echo ''
00:08:56.524  
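[Editor's note] waitforlisten (autotest_common.sh) polls until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket; the `(( i == 0 ))` / `return 0` trace above is the loop succeeding on the first try. A simplified sketch of the polling idea (retry count and probe command are assumptions, not lifted from the helper):

    waitfor() {
        local pid=$1 sock=$2 i
        for (( i = 0; i < 100; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1     # process died early
            # rpc_get_methods is a cheap RPC any running SPDK app answers
            if scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }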
00:08:56.524   05:54:17 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:08:56.524   05:54:17 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:08:56.524   05:54:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:56.524   05:54:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:56.524   05:54:17 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:08:56.524   05:54:17 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:08:56.524   05:54:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:56.524   05:54:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:56.524   05:54:17 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:08:56.524   05:54:17 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:08:56.524   05:54:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:08:56.783   05:54:17 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:08:56.783   05:54:17 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:08:56.783   05:54:17 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:56.783   05:54:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:56.783   05:54:17 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:08:56.783   05:54:17 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:08:56.783   05:54:17 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:08:56.783   05:54:17 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:08:56.783   05:54:17 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:08:56.783    05:54:17 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:08:56.783    05:54:17 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:08:56.783    05:54:17 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:08:57.041   05:54:17 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:08:57.041   05:54:17 json_config -- json_config/json_config.sh@51 -- # local get_types
00:08:57.041   05:54:17 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:08:57.041    05:54:17 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:08:57.041    05:54:17 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:08:57.041    05:54:17 json_config -- json_config/json_config.sh@54 -- # sort
00:08:57.041    05:54:17 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:08:57.041   05:54:17 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:08:57.041   05:54:17 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:08:57.041   05:54:17 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:08:57.041   05:54:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:08:57.041   05:54:17 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:57.298   05:54:18 json_config -- json_config/json_config.sh@62 -- # return 0
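[Editor's note] tgt_check_notification_types computes type_diff by concatenating the expected and reported type lists, sorting, and keeping only unpaired lines with `uniq -u`; an empty result means the two sets match exactly. The same symmetric-difference idiom in isolation:

    expected="bdev_register bdev_unregister fsdev_register fsdev_unregister"
    reported="fsdev_register fsdev_unregister bdev_register bdev_unregister"
    diff=$(echo $expected $reported | tr ' ' '\n' | sort | uniq -u)
    [[ -z $diff ]] && echo "sets match"   # mirrors the empty type_diff above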
00:08:57.298   05:54:18 json_config -- json_config/json_config.sh@285 -- # [[ 1 -eq 1 ]]
00:08:57.298   05:54:18 json_config -- json_config/json_config.sh@286 -- # create_bdev_subsystem_config
00:08:57.298   05:54:18 json_config -- json_config/json_config.sh@112 -- # timing_enter create_bdev_subsystem_config
00:08:57.298   05:54:18 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:57.298   05:54:18 json_config -- common/autotest_common.sh@10 -- # set +x
00:08:57.298   05:54:18 json_config -- json_config/json_config.sh@114 -- # expected_notifications=()
00:08:57.298   05:54:18 json_config -- json_config/json_config.sh@114 -- # local expected_notifications
00:08:57.298   05:54:18 json_config -- json_config/json_config.sh@118 -- # expected_notifications+=($(get_notifications))
00:08:57.298    05:54:18 json_config -- json_config/json_config.sh@118 -- # get_notifications
00:08:57.298    05:54:18 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:08:57.298    05:54:18 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:08:57.298    05:54:18 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:08:57.298     05:54:18 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:08:57.298     05:54:18 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:08:57.298     05:54:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:08:57.557    05:54:18 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:08:57.557    05:54:18 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:08:57.557    05:54:18 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:08:57.557   05:54:18 json_config -- json_config/json_config.sh@120 -- # [[ 1 -eq 1 ]]
00:08:57.557   05:54:18 json_config -- json_config/json_config.sh@121 -- # local lvol_store_base_bdev=Nvme0n1
00:08:57.557   05:54:18 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:08:57.557   05:54:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:08:57.557  Nvme0n1p0 Nvme0n1p1
00:08:57.817   05:54:18 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_split_create Malloc0 3
00:08:57.817   05:54:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:08:58.075  [2024-11-18 05:54:18.814535] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:58.075  [2024-11-18 05:54:18.814648] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:08:58.075  
00:08:58.075   05:54:18 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:08:58.075   05:54:18 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:08:58.339  Malloc3
00:08:58.339   05:54:19 json_config -- json_config/json_config.sh@126 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:08:58.339   05:54:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:08:58.599  [2024-11-18 05:54:19.350762] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:58.599  [2024-11-18 05:54:19.351063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.599  [2024-11-18 05:54:19.351230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000009f80
00:08:58.599  [2024-11-18 05:54:19.351370] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.599  [2024-11-18 05:54:19.354158] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.599  [2024-11-18 05:54:19.354349] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:08:58.599  PTBdevFromMalloc3
00:08:58.599   05:54:19 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_null_create Null0 32 512
00:08:58.599   05:54:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:08:58.858  Null0
00:08:58.858   05:54:19 json_config -- json_config/json_config.sh@130 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:08:58.858   05:54:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:08:58.858  Malloc0
00:08:59.117   05:54:19 json_config -- json_config/json_config.sh@131 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:08:59.117   05:54:19 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:08:59.117  Malloc1
00:08:59.117   05:54:20 json_config -- json_config/json_config.sh@144 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:08:59.117   05:54:20 json_config -- json_config/json_config.sh@147 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:08:59.376  102400+0 records in
00:08:59.376  102400+0 records out
00:08:59.376  104857600 bytes (105 MB, 100 MiB) copied, 0.232096 s, 452 MB/s
00:08:59.376   05:54:20 json_config -- json_config/json_config.sh@148 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:08:59.376   05:54:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:08:59.635  aio_disk
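[Editor's note] Above, a 100 MiB zero-filled file is created with dd and exposed as an AIO bdev with a 1024-byte block size via bdev_aio_create. Standalone, the same two steps look like this (socket path as in the log):

    dd if=/dev/zero of=/sample_aio bs=1024 count=102400
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock \
        bdev_aio_create /sample_aio aio_disk 1024   # file, bdev name, block size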
00:08:59.635   05:54:20 json_config -- json_config/json_config.sh@149 -- # expected_notifications+=(bdev_register:aio_disk)
00:08:59.635   05:54:20 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:08:59.635   05:54:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:08:59.893  a0f393cb-5975-4641-beb8-603d324c470d
00:08:59.893   05:54:20 json_config -- json_config/json_config.sh@161 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:08:59.893    05:54:20 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:08:59.893    05:54:20 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:09:00.151    05:54:21 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:09:00.151    05:54:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:09:00.719    05:54:21 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:09:00.719    05:54:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:09:00.719    05:54:21 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:09:00.719    05:54:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
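[Editor's note] The tgt_rpc calls above build the logical-volume fixture: an lvstore on Nvme0n1p0, a thick and a thin (`-t`) lvol, a snapshot of lvol0, and a clone of that snapshot. Replayed directly against rpc.py (size argument as in the trace; MiB in current SPDK):

    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test  # 1 MiB clusters
    $RPC bdev_lvol_create -l lvs_test lvol0 32       # thick-provisioned
    $RPC bdev_lvol_create -l lvs_test -t lvol1 32    # thin-provisioned
    $RPC bdev_lvol_snapshot lvs_test/lvol0 snapshot0
    $RPC bdev_lvol_clone lvs_test/snapshot0 clone0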
00:09:00.979   05:54:21 json_config -- json_config/json_config.sh@164 -- # [[ 0 -eq 1 ]]
00:09:00.979   05:54:21 json_config -- json_config/json_config.sh@179 -- # [[ 0 -eq 1 ]]
00:09:00.979   05:54:21 json_config -- json_config/json_config.sh@185 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:71964f83-b4c3-4511-a94a-8cf8c587a19f bdev_register:8b58a183-ddec-416b-a3be-44ddfdb4040e bdev_register:9d10e4e5-0ea4-4cb2-b568-82c8abb7eaff bdev_register:505c7f8d-2447-45d6-8404-013ce4330a19
00:09:00.979   05:54:21 json_config -- json_config/json_config.sh@74 -- # local events_to_check
00:09:00.979   05:54:21 json_config -- json_config/json_config.sh@75 -- # local recorded_events
00:09:00.979   05:54:21 json_config -- json_config/json_config.sh@78 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:09:00.979    05:54:21 json_config -- json_config/json_config.sh@78 -- # sort
00:09:00.979    05:54:21 json_config -- json_config/json_config.sh@78 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:71964f83-b4c3-4511-a94a-8cf8c587a19f bdev_register:8b58a183-ddec-416b-a3be-44ddfdb4040e bdev_register:9d10e4e5-0ea4-4cb2-b568-82c8abb7eaff bdev_register:505c7f8d-2447-45d6-8404-013ce4330a19
00:09:00.979   05:54:21 json_config -- json_config/json_config.sh@79 -- # recorded_events=($(get_notifications | sort))
00:09:00.979    05:54:21 json_config -- json_config/json_config.sh@79 -- # get_notifications
00:09:00.979    05:54:21 json_config -- json_config/json_config.sh@79 -- # sort
00:09:00.979    05:54:21 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:09:00.979    05:54:21 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:00.979    05:54:21 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:00.979     05:54:21 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:09:00.979     05:54:21 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:09:00.979     05:54:21 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p1
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p0
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc3
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:PTBdevFromMalloc3
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Null0
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p2
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p1
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p0
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc1
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:aio_disk
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:71964f83-b4c3-4511-a94a-8cf8c587a19f
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:8b58a183-ddec-416b-a3be-44ddfdb4040e
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.238    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.239    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:9d10e4e5-0ea4-4cb2-b568-82c8abb7eaff
00:09:01.239    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.239    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.239    05:54:22 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:505c7f8d-2447-45d6-8404-013ce4330a19
00:09:01.239    05:54:22 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:01.239    05:54:22 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:01.239   05:54:22 json_config -- json_config/json_config.sh@81 -- # [[ bdev_register:505c7f8d-2447-45d6-8404-013ce4330a19 bdev_register:71964f83-b4c3-4511-a94a-8cf8c587a19f bdev_register:8b58a183-ddec-416b-a3be-44ddfdb4040e bdev_register:9d10e4e5-0ea4-4cb2-b568-82c8abb7eaff bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\5\0\5\c\7\f\8\d\-\2\4\4\7\-\4\5\d\6\-\8\4\0\4\-\0\1\3\c\e\4\3\3\0\a\1\9\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\7\1\9\6\4\f\8\3\-\b\4\c\3\-\4\5\1\1\-\a\9\4\a\-\8\c\f\8\c\5\8\7\a\1\9\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\b\5\8\a\1\8\3\-\d\d\e\c\-\4\1\6\b\-\a\3\b\e\-\4\4\d\d\f\d\b\4\0\4\0\e\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\9\d\1\0\e\4\e\5\-\0\e\a\4\-\4\c\b\2\-\b\5\6\8\-\8\2\c\8\a\b\b\7\e\a\f\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]]
00:09:01.239   05:54:22 json_config -- json_config/json_config.sh@93 -- # cat
00:09:01.239    05:54:22 json_config -- json_config/json_config.sh@93 -- # printf ' %s\n' bdev_register:505c7f8d-2447-45d6-8404-013ce4330a19 bdev_register:71964f83-b4c3-4511-a94a-8cf8c587a19f bdev_register:8b58a183-ddec-416b-a3be-44ddfdb4040e bdev_register:9d10e4e5-0ea4-4cb2-b568-82c8abb7eaff bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk
00:09:01.239  Expected events matched:
00:09:01.239   bdev_register:505c7f8d-2447-45d6-8404-013ce4330a19
00:09:01.239   bdev_register:71964f83-b4c3-4511-a94a-8cf8c587a19f
00:09:01.239   bdev_register:8b58a183-ddec-416b-a3be-44ddfdb4040e
00:09:01.239   bdev_register:9d10e4e5-0ea4-4cb2-b568-82c8abb7eaff
00:09:01.239   bdev_register:Malloc0
00:09:01.239   bdev_register:Malloc0p0
00:09:01.239   bdev_register:Malloc0p1
00:09:01.239   bdev_register:Malloc0p2
00:09:01.239   bdev_register:Malloc1
00:09:01.239   bdev_register:Malloc3
00:09:01.239   bdev_register:Null0
00:09:01.239   bdev_register:Nvme0n1
00:09:01.239   bdev_register:Nvme0n1p0
00:09:01.239   bdev_register:Nvme0n1p1
00:09:01.239   bdev_register:PTBdevFromMalloc3
00:09:01.239   bdev_register:aio_disk
00:09:01.239   05:54:22 json_config -- json_config/json_config.sh@187 -- # timing_exit create_bdev_subsystem_config
00:09:01.239   05:54:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:01.239   05:54:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:01.239   05:54:22 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:09:01.239   05:54:22 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:09:01.239   05:54:22 json_config -- json_config/json_config.sh@297 -- # [[ 0 -eq 1 ]]
00:09:01.239   05:54:22 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:09:01.239   05:54:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:01.239   05:54:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:01.497   05:54:22 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:09:01.497   05:54:22 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:01.497   05:54:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:01.757  MallocBdevForConfigChangeCheck
00:09:01.757   05:54:22 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:09:01.757   05:54:22 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:01.757   05:54:22 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:01.757   05:54:22 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:09:01.757   05:54:22 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:02.016  INFO: shutting down applications...
00:09:02.016   05:54:22 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:09:02.016   05:54:22 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:09:02.016   05:54:22 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:09:02.016   05:54:22 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:09:02.016   05:54:22 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:09:02.275  [2024-11-18 05:54:23.115601] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:09:02.535  Calling clear_vhost_scsi_subsystem
00:09:02.535  Calling clear_iscsi_subsystem
00:09:02.535  Calling clear_vhost_blk_subsystem
00:09:02.535  Calling clear_ublk_subsystem
00:09:02.535  Calling clear_nbd_subsystem
00:09:02.535  Calling clear_nvmf_subsystem
00:09:02.535  Calling clear_bdev_subsystem
00:09:02.535   05:54:23 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:09:02.535   05:54:23 json_config -- json_config/json_config.sh@350 -- # count=100
00:09:02.535   05:54:23 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:09:02.535   05:54:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:02.535   05:54:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:09:02.535   05:54:23 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:09:02.795   05:54:23 json_config -- json_config/json_config.sh@352 -- # break
00:09:02.795   05:54:23 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:09:02.795   05:54:23 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:09:02.795   05:54:23 json_config -- json_config/common.sh@31 -- # local app=target
00:09:02.795   05:54:23 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:02.795   05:54:23 json_config -- json_config/common.sh@35 -- # [[ -n 78352 ]]
00:09:02.796   05:54:23 json_config -- json_config/common.sh@38 -- # kill -SIGINT 78352
00:09:02.796   05:54:23 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:02.796   05:54:23 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:02.796   05:54:23 json_config -- json_config/common.sh@41 -- # kill -0 78352
00:09:02.796   05:54:23 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:03.364   05:54:24 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:03.364   05:54:24 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:03.364   05:54:24 json_config -- json_config/common.sh@41 -- # kill -0 78352
00:09:03.364  SPDK target shutdown done
00:09:03.364  INFO: relaunching applications...
00:09:03.364   05:54:24 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:03.364   05:54:24 json_config -- json_config/common.sh@43 -- # break
00:09:03.364   05:54:24 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:03.364   05:54:24 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
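[Editor's note] json_config_test_shutdown_app sends SIGINT, then polls up to 30 times at 0.5 s intervals with `kill -0` (signal 0 only checks that the pid still exists); here the target is gone after one sleep. The shape of that loop, reduced:

    shutdown_app() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 0   # gone: clean shutdown
            sleep 0.5
        done
        return 1   # still alive after ~15 s
    }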
00:09:03.364   05:54:24 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:09:03.364   05:54:24 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:03.364   05:54:24 json_config -- json_config/common.sh@9 -- # local app=target
00:09:03.364   05:54:24 json_config -- json_config/common.sh@10 -- # shift
00:09:03.364   05:54:24 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:03.364   05:54:24 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:03.364   05:54:24 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:03.364   05:54:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:03.364   05:54:24 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:03.364   05:54:24 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=78595
00:09:03.364   05:54:24 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:03.364  Waiting for target to run...
00:09:03.364   05:54:24 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:03.364   05:54:24 json_config -- json_config/common.sh@25 -- # waitforlisten 78595 /var/tmp/spdk_tgt.sock
00:09:03.364   05:54:24 json_config -- common/autotest_common.sh@835 -- # '[' -z 78595 ']'
00:09:03.364   05:54:24 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:03.364   05:54:24 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:03.364   05:54:24 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:03.364  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:03.364   05:54:24 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:03.364   05:54:24 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:03.364  [2024-11-18 05:54:24.320292] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:03.364  [2024-11-18 05:54:24.320791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78595 ]
00:09:03.934  [2024-11-18 05:54:24.655781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:03.934  [2024-11-18 05:54:24.669511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:03.934  [2024-11-18 05:54:24.822823] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:09:03.934  [2024-11-18 05:54:24.822981] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:09:03.934  [2024-11-18 05:54:24.830801] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:03.934  [2024-11-18 05:54:24.830907] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:03.934  [2024-11-18 05:54:24.838799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:03.934  [2024-11-18 05:54:24.838872] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:09:03.934  [2024-11-18 05:54:24.838892] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:09:04.193  [2024-11-18 05:54:24.927163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:04.193  [2024-11-18 05:54:24.927240] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:04.193  [2024-11-18 05:54:24.927263] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80
00:09:04.193  [2024-11-18 05:54:24.927276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:04.193  [2024-11-18 05:54:24.927673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:04.193  [2024-11-18 05:54:24.927753] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:09:04.453  
00:09:04.453  INFO: Checking if target configuration is the same...
00:09:04.453   05:54:25 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:04.453   05:54:25 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:04.453   05:54:25 json_config -- json_config/common.sh@26 -- # echo ''
00:09:04.453   05:54:25 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:09:04.453   05:54:25 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:09:04.453   05:54:25 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:04.453    05:54:25 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:09:04.453    05:54:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:04.453  + '[' 2 -ne 2 ']'
00:09:04.453  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:09:04.453  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:09:04.453  + rootdir=/home/vagrant/spdk_repo/spdk
00:09:04.453  +++ basename /dev/fd/62
00:09:04.453  ++ mktemp /tmp/62.XXX
00:09:04.453  + tmp_file_1=/tmp/62.qWL
00:09:04.453  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:04.453  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:04.453  + tmp_file_2=/tmp/spdk_tgt_config.json.yAh
00:09:04.453  + ret=0
00:09:04.453  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:04.712  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:04.971  + diff -u /tmp/62.qWL /tmp/spdk_tgt_config.json.yAh
00:09:04.971  INFO: JSON config files are the same
00:09:04.971  + echo 'INFO: JSON config files are the same'
00:09:04.971  + rm /tmp/62.qWL /tmp/spdk_tgt_config.json.yAh
00:09:04.971  + exit 0
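[Editor's note] json_diff.sh proves that the running target and the saved JSON file describe the same configuration: both are dumped, normalized through config_filter.py -method sort, and compared with diff -u, so an identical pair exits 0 as above. A hedged sketch of the pipeline (assuming the filter reads JSON on stdin, as json_diff.sh uses it):

    a=$(mktemp) b=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$a"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$b"
    diff -u "$a" "$b" && echo 'INFO: JSON config files are the same'
    rm "$a" "$b"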
00:09:04.971  INFO: changing configuration and checking if this can be detected...
00:09:04.971   05:54:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:09:04.971   05:54:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:04.971   05:54:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:04.971   05:54:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:05.231    05:54:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:09:05.231   05:54:25 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:05.231    05:54:25 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:05.231  + '[' 2 -ne 2 ']'
00:09:05.231  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:09:05.231  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:09:05.231  + rootdir=/home/vagrant/spdk_repo/spdk
00:09:05.231  +++ basename /dev/fd/62
00:09:05.231  ++ mktemp /tmp/62.XXX
00:09:05.231  + tmp_file_1=/tmp/62.iqK
00:09:05.231  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:05.231  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:05.231  + tmp_file_2=/tmp/spdk_tgt_config.json.Xjw
00:09:05.231  + ret=0
00:09:05.231  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:05.490  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:05.490  + diff -u /tmp/62.iqK /tmp/spdk_tgt_config.json.Xjw
00:09:05.490  + ret=1
00:09:05.490  + echo '=== Start of file: /tmp/62.iqK ==='
00:09:05.490  + cat /tmp/62.iqK
00:09:05.490  + echo '=== End of file: /tmp/62.iqK ==='
00:09:05.490  + echo ''
00:09:05.490  + echo '=== Start of file: /tmp/spdk_tgt_config.json.Xjw ==='
00:09:05.490  + cat /tmp/spdk_tgt_config.json.Xjw
00:09:05.490  + echo '=== End of file: /tmp/spdk_tgt_config.json.Xjw ==='
00:09:05.490  + echo ''
00:09:05.490  + rm /tmp/62.iqK /tmp/spdk_tgt_config.json.Xjw
00:09:05.490  + exit 1
00:09:05.490  INFO: configuration change detected.
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:09:05.490   05:54:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:05.490   05:54:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@324 -- # [[ -n 78595 ]]
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:09:05.490   05:54:26 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:05.490   05:54:26 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@193 -- # [[ 1 -eq 1 ]]
00:09:05.490   05:54:26 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:09:05.490   05:54:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:09:06.058   05:54:26 json_config -- json_config/json_config.sh@195 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:09:06.058   05:54:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:09:06.058   05:54:26 json_config -- json_config/json_config.sh@196 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:09:06.058   05:54:26 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:09:06.317   05:54:27 json_config -- json_config/json_config.sh@197 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:09:06.317   05:54:27 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:09:06.577    05:54:27 json_config -- json_config/json_config.sh@200 -- # uname -s
00:09:06.577   05:54:27 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:09:06.577   05:54:27 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:09:06.577   05:54:27 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:09:06.577   05:54:27 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:06.577   05:54:27 json_config -- json_config/json_config.sh@330 -- # killprocess 78595
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@954 -- # '[' -z 78595 ']'
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@958 -- # kill -0 78595
00:09:06.577    05:54:27 json_config -- common/autotest_common.sh@959 -- # uname
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:06.577    05:54:27 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78595
00:09:06.577  killing process with pid 78595
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78595'
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@973 -- # kill 78595
00:09:06.577   05:54:27 json_config -- common/autotest_common.sh@978 -- # wait 78595
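[Editor's note] killprocess (autotest_common.sh) checks that the pid is still alive and that its command name is not sudo before signalling (the trace shows `ps --no-headers -o comm=` resolving to reactor_0), then waits so the exit status is reaped. In reduced form (an approximation of the helper, not its full sudo-handling path):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1              # must still exist
        [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                         # reap child status
    }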
00:09:06.836   05:54:27 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:06.836   05:54:27 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:09:06.836   05:54:27 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:06.836   05:54:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:06.836  INFO: Success
00:09:06.836   05:54:27 json_config -- json_config/json_config.sh@335 -- # return 0
00:09:06.836   05:54:27 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:09:06.836  ************************************
00:09:06.836  END TEST json_config
00:09:06.836  ************************************
00:09:06.836  
00:09:06.836  real	0m11.705s
00:09:06.836  user	0m17.971s
00:09:06.836  sys	0m2.228s
00:09:06.836   05:54:27 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:06.836   05:54:27 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:07.096   05:54:27  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:07.096   05:54:27  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:07.096   05:54:27  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:07.096   05:54:27  -- common/autotest_common.sh@10 -- # set +x
00:09:07.096  ************************************
00:09:07.096  START TEST json_config_extra_key
00:09:07.096  ************************************
00:09:07.096   05:54:27 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:07.096    05:54:27 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:07.096     05:54:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:09:07.096     05:54:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:07.096    05:54:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:07.096     05:54:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:07.096    05:54:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:07.096    05:54:27 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:07.096    05:54:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:07.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.096  		--rc genhtml_branch_coverage=1
00:09:07.096  		--rc genhtml_function_coverage=1
00:09:07.096  		--rc genhtml_legend=1
00:09:07.096  		--rc geninfo_all_blocks=1
00:09:07.096  		--rc geninfo_unexecuted_blocks=1
00:09:07.096  		
00:09:07.096  		'
00:09:07.096    05:54:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:07.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.096  		--rc genhtml_branch_coverage=1
00:09:07.096  		--rc genhtml_function_coverage=1
00:09:07.096  		--rc genhtml_legend=1
00:09:07.096  		--rc geninfo_all_blocks=1
00:09:07.096  		--rc geninfo_unexecuted_blocks=1
00:09:07.096  		
00:09:07.096  		'
00:09:07.096    05:54:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:07.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.096  		--rc genhtml_branch_coverage=1
00:09:07.096  		--rc genhtml_function_coverage=1
00:09:07.096  		--rc genhtml_legend=1
00:09:07.096  		--rc geninfo_all_blocks=1
00:09:07.096  		--rc geninfo_unexecuted_blocks=1
00:09:07.096  		
00:09:07.096  		'
00:09:07.096    05:54:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:07.096  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.096  		--rc genhtml_branch_coverage=1
00:09:07.096  		--rc genhtml_function_coverage=1
00:09:07.096  		--rc genhtml_legend=1
00:09:07.096  		--rc geninfo_all_blocks=1
00:09:07.096  		--rc geninfo_unexecuted_blocks=1
00:09:07.096  		
00:09:07.096  		'
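Note: the nested trace above is scripts/common.sh's cmp_versions at work: "lt 1.15 2" splits both version strings on the IFS characters ".-:", treats missing components as zero via the decimal helper, and compares field by field, so lcov 1.x is detected as older than 2 and the legacy "--rc lcov_*" coverage options get exported. A minimal standalone sketch of the same component-wise compare (a simplified illustration, not the SPDK source verbatim):

  lt() {
      # Split version strings into arrays on the same IFS the trace uses.
      local IFS=.-:
      local -a a b
      local i
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "older"   # prints "older", matching the return 0 above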
00:09:07.096   05:54:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:07.097     05:54:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:07.097    05:54:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:07.097     05:54:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:7bcc7b4b-21e3-45fa-8e12-040cf09ad907
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=7bcc7b4b-21e3-45fa-8e12-040cf09ad907
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:07.097     05:54:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:07.097     05:54:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:07.097     05:54:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:07.097     05:54:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:07.097      05:54:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:07.097      05:54:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:07.097      05:54:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:07.097      05:54:28 json_config_extra_key -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:07.097      05:54:28 json_config_extra_key -- paths/export.sh@6 -- # export PATH
00:09:07.097      05:54:28 json_config_extra_key -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:07.097  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:07.097    05:54:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
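Note: the "integer expression expected" error reported at nvmf/common.sh line 33 comes from testing an empty string with -eq ('[' '' -eq 1 ']'); the run tolerates it because the test simply evaluates false and the script falls through. A defensive pattern for such optional numeric flags (a generic sketch with a hypothetical variable name, not a proposed SPDK patch):

  # Default an unset or empty flag to 0 before an integer comparison.
  if [[ "${SOME_FLAG:-0}" -eq 1 ]]; then
      echo "flag enabled"
  fi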
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:07.097  INFO: launching applications...
00:09:07.097  Waiting for target to run...
00:09:07.097  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:07.097   05:54:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=78764
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 78764 /var/tmp/spdk_tgt.sock
00:09:07.097   05:54:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 78764 ']'
00:09:07.097   05:54:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:07.097   05:54:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:07.097   05:54:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:07.097   05:54:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:07.097   05:54:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:07.097   05:54:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:07.357  [2024-11-18 05:54:28.092009] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:07.357  [2024-11-18 05:54:28.092229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78764 ]
00:09:07.617  [2024-11-18 05:54:28.419658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:07.617  [2024-11-18 05:54:28.433174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
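Note: the DPDK EAL parameters and reactor notice above correspond to the spdk_tgt invocation traced at json_config/common.sh@21. To reproduce the launch by hand from a checkout (paths as printed in this log):

  # Start the target with the extra_key JSON config and a private RPC socket.
  ./build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json test/json_config/extra_key.json &
  tgt_pid=$!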
00:09:08.187   05:54:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:08.187   05:54:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:09:08.187  
00:09:08.187  INFO: shutting down applications...
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:08.187   05:54:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:08.187   05:54:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 78764 ]]
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 78764
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 78764
00:09:08.187   05:54:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:08.753   05:54:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:08.753   05:54:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:08.753   05:54:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 78764
00:09:08.753   05:54:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:08.753   05:54:29 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:08.753   05:54:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:08.753  SPDK target shutdown done
00:09:08.753   05:54:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:08.753  Success
00:09:08.753   05:54:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
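Note: the shutdown traced above sends SIGINT and then polls with kill -0 up to 30 times at 0.5 s intervals before declaring "SPDK target shutdown done". The same wait pattern, as a generic sketch of json_config/common.sh's loop rather than a verbatim copy:

  kill -SIGINT "$tgt_pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$tgt_pid" 2> /dev/null || break   # kill -0 fails once the PID is gone
      sleep 0.5
  done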
00:09:08.753  
00:09:08.753  real	0m1.737s
00:09:08.753  user	0m1.567s
00:09:08.753  sys	0m0.418s
00:09:08.753   05:54:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:08.753  ************************************
00:09:08.753  END TEST json_config_extra_key
00:09:08.753  ************************************
00:09:08.753   05:54:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:08.753   05:54:29  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:08.753   05:54:29  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:08.753   05:54:29  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:08.753   05:54:29  -- common/autotest_common.sh@10 -- # set +x
00:09:08.753  ************************************
00:09:08.753  START TEST alias_rpc
00:09:08.753  ************************************
00:09:08.753   05:54:29 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:08.753  * Looking for test storage...
00:09:08.753  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:09:08.753    05:54:29 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:08.753     05:54:29 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:09:08.753     05:54:29 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:09.013    05:54:29 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:09.013     05:54:29 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:09.013    05:54:29 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:09.013    05:54:29 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:09.013    05:54:29 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:09.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.013  		--rc genhtml_branch_coverage=1
00:09:09.013  		--rc genhtml_function_coverage=1
00:09:09.013  		--rc genhtml_legend=1
00:09:09.013  		--rc geninfo_all_blocks=1
00:09:09.013  		--rc geninfo_unexecuted_blocks=1
00:09:09.013  		
00:09:09.013  		'
00:09:09.013    05:54:29 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:09.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.013  		--rc genhtml_branch_coverage=1
00:09:09.013  		--rc genhtml_function_coverage=1
00:09:09.013  		--rc genhtml_legend=1
00:09:09.013  		--rc geninfo_all_blocks=1
00:09:09.013  		--rc geninfo_unexecuted_blocks=1
00:09:09.013  		
00:09:09.013  		'
00:09:09.013    05:54:29 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:09.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.013  		--rc genhtml_branch_coverage=1
00:09:09.013  		--rc genhtml_function_coverage=1
00:09:09.013  		--rc genhtml_legend=1
00:09:09.013  		--rc geninfo_all_blocks=1
00:09:09.013  		--rc geninfo_unexecuted_blocks=1
00:09:09.013  		
00:09:09.013  		'
00:09:09.013    05:54:29 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:09.013  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:09.013  		--rc genhtml_branch_coverage=1
00:09:09.013  		--rc genhtml_function_coverage=1
00:09:09.013  		--rc genhtml_legend=1
00:09:09.013  		--rc geninfo_all_blocks=1
00:09:09.013  		--rc geninfo_unexecuted_blocks=1
00:09:09.013  		
00:09:09.013  		'
00:09:09.013   05:54:29 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:09.013   05:54:29 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=78840
00:09:09.013   05:54:29 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:09.013   05:54:29 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 78840
00:09:09.013   05:54:29 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 78840 ']'
00:09:09.013   05:54:29 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:09.013   05:54:29 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:09.013  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:09.014   05:54:29 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:09.014   05:54:29 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:09.014   05:54:29 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:09.014  [2024-11-18 05:54:29.897069] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:09.014  [2024-11-18 05:54:29.897268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78840 ]
00:09:09.273  [2024-11-18 05:54:30.052351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:09.273  [2024-11-18 05:54:30.072862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.841   05:54:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:09.841   05:54:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:09.841   05:54:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
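Note: the alias_rpc test exercises the deprecated-alias layer by replaying a saved configuration through rpc.py load_config with the -i flag shown above (read here as the include-aliases option, consistent with the test's name). A hedged usage sketch; the config filename is a placeholder, not taken from this log:

  # Feed a previously saved JSON config back through the RPC alias layer.
  ./scripts/rpc.py load_config -i < saved_config.json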
00:09:10.100   05:54:31 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 78840
00:09:10.100   05:54:31 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 78840 ']'
00:09:10.100   05:54:31 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 78840
00:09:10.100    05:54:31 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:09:10.359   05:54:31 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:10.359    05:54:31 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78840
00:09:10.359   05:54:31 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:10.359   05:54:31 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:10.359  killing process with pid 78840
00:09:10.359   05:54:31 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78840'
00:09:10.359   05:54:31 alias_rpc -- common/autotest_common.sh@973 -- # kill 78840
00:09:10.359   05:54:31 alias_rpc -- common/autotest_common.sh@978 -- # wait 78840
00:09:10.618  
00:09:10.619  real	0m1.771s
00:09:10.619  user	0m2.000s
00:09:10.619  sys	0m0.419s
00:09:10.619   05:54:31 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:10.619   05:54:31 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:10.619  ************************************
00:09:10.619  END TEST alias_rpc
00:09:10.619  ************************************
00:09:10.619   05:54:31  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:09:10.619   05:54:31  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:10.619   05:54:31  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:10.619   05:54:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:10.619   05:54:31  -- common/autotest_common.sh@10 -- # set +x
00:09:10.619  ************************************
00:09:10.619  START TEST spdkcli_tcp
00:09:10.619  ************************************
00:09:10.619   05:54:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:10.619  * Looking for test storage...
00:09:10.619  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:09:10.619    05:54:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:10.619     05:54:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:09:10.619     05:54:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:10.878    05:54:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:10.878     05:54:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:10.878    05:54:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:09:10.878    05:54:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:10.878    05:54:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.878  		--rc genhtml_branch_coverage=1
00:09:10.878  		--rc genhtml_function_coverage=1
00:09:10.878  		--rc genhtml_legend=1
00:09:10.878  		--rc geninfo_all_blocks=1
00:09:10.878  		--rc geninfo_unexecuted_blocks=1
00:09:10.878  		
00:09:10.878  		'
00:09:10.878    05:54:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.878  		--rc genhtml_branch_coverage=1
00:09:10.878  		--rc genhtml_function_coverage=1
00:09:10.878  		--rc genhtml_legend=1
00:09:10.878  		--rc geninfo_all_blocks=1
00:09:10.878  		--rc geninfo_unexecuted_blocks=1
00:09:10.878  		
00:09:10.878  		'
00:09:10.878    05:54:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.878  		--rc genhtml_branch_coverage=1
00:09:10.878  		--rc genhtml_function_coverage=1
00:09:10.878  		--rc genhtml_legend=1
00:09:10.878  		--rc geninfo_all_blocks=1
00:09:10.878  		--rc geninfo_unexecuted_blocks=1
00:09:10.878  		
00:09:10.878  		'
00:09:10.878    05:54:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:10.878  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:10.878  		--rc genhtml_branch_coverage=1
00:09:10.878  		--rc genhtml_function_coverage=1
00:09:10.878  		--rc genhtml_legend=1
00:09:10.878  		--rc geninfo_all_blocks=1
00:09:10.878  		--rc geninfo_unexecuted_blocks=1
00:09:10.878  		
00:09:10.878  		'
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:09:10.878    05:54:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:09:10.878    05:54:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=78925
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:09:10.878   05:54:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 78925
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 78925 ']'
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:10.878  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:10.878   05:54:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:10.878  [2024-11-18 05:54:31.694095] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:10.878  [2024-11-18 05:54:31.694268] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78925 ]
00:09:10.878  [2024-11-18 05:54:31.850037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:11.137  [2024-11-18 05:54:31.876858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:11.138  [2024-11-18 05:54:31.876859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:11.138   05:54:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:11.138   05:54:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:09:11.138   05:54:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=78929
00:09:11.138   05:54:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:09:11.138   05:54:32 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:09:11.397  [
00:09:11.397    "spdk_get_version",
00:09:11.397    "rpc_get_methods",
00:09:11.397    "notify_get_notifications",
00:09:11.397    "notify_get_types",
00:09:11.397    "trace_get_info",
00:09:11.397    "trace_get_tpoint_group_mask",
00:09:11.397    "trace_disable_tpoint_group",
00:09:11.397    "trace_enable_tpoint_group",
00:09:11.397    "trace_clear_tpoint_mask",
00:09:11.397    "trace_set_tpoint_mask",
00:09:11.397    "fsdev_set_opts",
00:09:11.397    "fsdev_get_opts",
00:09:11.397    "framework_get_pci_devices",
00:09:11.397    "framework_get_config",
00:09:11.397    "framework_get_subsystems",
00:09:11.397    "keyring_get_keys",
00:09:11.397    "iobuf_get_stats",
00:09:11.397    "iobuf_set_options",
00:09:11.397    "sock_get_default_impl",
00:09:11.397    "sock_set_default_impl",
00:09:11.397    "sock_impl_set_options",
00:09:11.397    "sock_impl_get_options",
00:09:11.397    "vmd_rescan",
00:09:11.397    "vmd_remove_device",
00:09:11.397    "vmd_enable",
00:09:11.397    "accel_get_stats",
00:09:11.397    "accel_set_options",
00:09:11.397    "accel_set_driver",
00:09:11.397    "accel_crypto_key_destroy",
00:09:11.397    "accel_crypto_keys_get",
00:09:11.397    "accel_crypto_key_create",
00:09:11.397    "accel_assign_opc",
00:09:11.397    "accel_get_module_info",
00:09:11.397    "accel_get_opc_assignments",
00:09:11.397    "bdev_get_histogram",
00:09:11.397    "bdev_enable_histogram",
00:09:11.397    "bdev_set_qos_limit",
00:09:11.397    "bdev_set_qd_sampling_period",
00:09:11.397    "bdev_get_bdevs",
00:09:11.397    "bdev_reset_iostat",
00:09:11.397    "bdev_get_iostat",
00:09:11.397    "bdev_examine",
00:09:11.397    "bdev_wait_for_examine",
00:09:11.397    "bdev_set_options",
00:09:11.397    "scsi_get_devices",
00:09:11.397    "thread_set_cpumask",
00:09:11.397    "scheduler_set_options",
00:09:11.397    "framework_get_governor",
00:09:11.397    "framework_get_scheduler",
00:09:11.397    "framework_set_scheduler",
00:09:11.397    "framework_get_reactors",
00:09:11.397    "thread_get_io_channels",
00:09:11.397    "thread_get_pollers",
00:09:11.397    "thread_get_stats",
00:09:11.397    "framework_monitor_context_switch",
00:09:11.397    "spdk_kill_instance",
00:09:11.397    "log_enable_timestamps",
00:09:11.397    "log_get_flags",
00:09:11.397    "log_clear_flag",
00:09:11.397    "log_set_flag",
00:09:11.397    "log_get_level",
00:09:11.397    "log_set_level",
00:09:11.397    "log_get_print_level",
00:09:11.397    "log_set_print_level",
00:09:11.397    "framework_enable_cpumask_locks",
00:09:11.397    "framework_disable_cpumask_locks",
00:09:11.397    "framework_wait_init",
00:09:11.397    "framework_start_init",
00:09:11.397    "virtio_blk_create_transport",
00:09:11.397    "virtio_blk_get_transports",
00:09:11.397    "vhost_controller_set_coalescing",
00:09:11.397    "vhost_get_controllers",
00:09:11.397    "vhost_delete_controller",
00:09:11.397    "vhost_create_blk_controller",
00:09:11.397    "vhost_scsi_controller_remove_target",
00:09:11.397    "vhost_scsi_controller_add_target",
00:09:11.397    "vhost_start_scsi_controller",
00:09:11.397    "vhost_create_scsi_controller",
00:09:11.398    "ublk_recover_disk",
00:09:11.398    "ublk_get_disks",
00:09:11.398    "ublk_stop_disk",
00:09:11.398    "ublk_start_disk",
00:09:11.398    "ublk_destroy_target",
00:09:11.398    "ublk_create_target",
00:09:11.398    "nbd_get_disks",
00:09:11.398    "nbd_stop_disk",
00:09:11.398    "nbd_start_disk",
00:09:11.398    "env_dpdk_get_mem_stats",
00:09:11.398    "nvmf_stop_mdns_prr",
00:09:11.398    "nvmf_publish_mdns_prr",
00:09:11.398    "nvmf_subsystem_get_listeners",
00:09:11.398    "nvmf_subsystem_get_qpairs",
00:09:11.398    "nvmf_subsystem_get_controllers",
00:09:11.398    "nvmf_get_stats",
00:09:11.398    "nvmf_get_transports",
00:09:11.398    "nvmf_create_transport",
00:09:11.398    "nvmf_get_targets",
00:09:11.398    "nvmf_delete_target",
00:09:11.398    "nvmf_create_target",
00:09:11.398    "nvmf_subsystem_allow_any_host",
00:09:11.398    "nvmf_subsystem_set_keys",
00:09:11.398    "nvmf_subsystem_remove_host",
00:09:11.398    "nvmf_subsystem_add_host",
00:09:11.398    "nvmf_ns_remove_host",
00:09:11.398    "nvmf_ns_add_host",
00:09:11.398    "nvmf_subsystem_remove_ns",
00:09:11.398    "nvmf_subsystem_set_ns_ana_group",
00:09:11.398    "nvmf_subsystem_add_ns",
00:09:11.398    "nvmf_subsystem_listener_set_ana_state",
00:09:11.398    "nvmf_discovery_get_referrals",
00:09:11.398    "nvmf_discovery_remove_referral",
00:09:11.398    "nvmf_discovery_add_referral",
00:09:11.398    "nvmf_subsystem_remove_listener",
00:09:11.398    "nvmf_subsystem_add_listener",
00:09:11.398    "nvmf_delete_subsystem",
00:09:11.398    "nvmf_create_subsystem",
00:09:11.398    "nvmf_get_subsystems",
00:09:11.398    "nvmf_set_crdt",
00:09:11.398    "nvmf_set_config",
00:09:11.398    "nvmf_set_max_subsystems",
00:09:11.398    "iscsi_get_histogram",
00:09:11.398    "iscsi_enable_histogram",
00:09:11.398    "iscsi_set_options",
00:09:11.398    "iscsi_get_auth_groups",
00:09:11.398    "iscsi_auth_group_remove_secret",
00:09:11.398    "iscsi_auth_group_add_secret",
00:09:11.398    "iscsi_delete_auth_group",
00:09:11.398    "iscsi_create_auth_group",
00:09:11.398    "iscsi_set_discovery_auth",
00:09:11.398    "iscsi_get_options",
00:09:11.398    "iscsi_target_node_request_logout",
00:09:11.398    "iscsi_target_node_set_redirect",
00:09:11.398    "iscsi_target_node_set_auth",
00:09:11.398    "iscsi_target_node_add_lun",
00:09:11.398    "iscsi_get_stats",
00:09:11.398    "iscsi_get_connections",
00:09:11.398    "iscsi_portal_group_set_auth",
00:09:11.398    "iscsi_start_portal_group",
00:09:11.398    "iscsi_delete_portal_group",
00:09:11.398    "iscsi_create_portal_group",
00:09:11.398    "iscsi_get_portal_groups",
00:09:11.398    "iscsi_delete_target_node",
00:09:11.398    "iscsi_target_node_remove_pg_ig_maps",
00:09:11.398    "iscsi_target_node_add_pg_ig_maps",
00:09:11.398    "iscsi_create_target_node",
00:09:11.398    "iscsi_get_target_nodes",
00:09:11.398    "iscsi_delete_initiator_group",
00:09:11.398    "iscsi_initiator_group_remove_initiators",
00:09:11.398    "iscsi_initiator_group_add_initiators",
00:09:11.398    "iscsi_create_initiator_group",
00:09:11.398    "iscsi_get_initiator_groups",
00:09:11.398    "fsdev_aio_delete",
00:09:11.398    "fsdev_aio_create",
00:09:11.398    "keyring_linux_set_options",
00:09:11.398    "keyring_file_remove_key",
00:09:11.398    "keyring_file_add_key",
00:09:11.398    "iaa_scan_accel_module",
00:09:11.398    "dsa_scan_accel_module",
00:09:11.398    "ioat_scan_accel_module",
00:09:11.398    "accel_error_inject_error",
00:09:11.398    "bdev_iscsi_delete",
00:09:11.398    "bdev_iscsi_create",
00:09:11.398    "bdev_iscsi_set_options",
00:09:11.398    "bdev_virtio_attach_controller",
00:09:11.398    "bdev_virtio_scsi_get_devices",
00:09:11.398    "bdev_virtio_detach_controller",
00:09:11.398    "bdev_virtio_blk_set_hotplug",
00:09:11.398    "bdev_ftl_set_property",
00:09:11.398    "bdev_ftl_get_properties",
00:09:11.398    "bdev_ftl_get_stats",
00:09:11.398    "bdev_ftl_unmap",
00:09:11.398    "bdev_ftl_unload",
00:09:11.398    "bdev_ftl_delete",
00:09:11.398    "bdev_ftl_load",
00:09:11.398    "bdev_ftl_create",
00:09:11.398    "bdev_aio_delete",
00:09:11.398    "bdev_aio_rescan",
00:09:11.398    "bdev_aio_create",
00:09:11.398    "blobfs_create",
00:09:11.398    "blobfs_detect",
00:09:11.398    "blobfs_set_cache_size",
00:09:11.398    "bdev_zone_block_delete",
00:09:11.398    "bdev_zone_block_create",
00:09:11.398    "bdev_delay_delete",
00:09:11.398    "bdev_delay_create",
00:09:11.398    "bdev_delay_update_latency",
00:09:11.398    "bdev_split_delete",
00:09:11.398    "bdev_split_create",
00:09:11.398    "bdev_error_inject_error",
00:09:11.398    "bdev_error_delete",
00:09:11.398    "bdev_error_create",
00:09:11.398    "bdev_raid_set_options",
00:09:11.398    "bdev_raid_remove_base_bdev",
00:09:11.398    "bdev_raid_add_base_bdev",
00:09:11.398    "bdev_raid_delete",
00:09:11.398    "bdev_raid_create",
00:09:11.398    "bdev_raid_get_bdevs",
00:09:11.398    "bdev_lvol_set_parent_bdev",
00:09:11.398    "bdev_lvol_set_parent",
00:09:11.398    "bdev_lvol_check_shallow_copy",
00:09:11.398    "bdev_lvol_start_shallow_copy",
00:09:11.398    "bdev_lvol_grow_lvstore",
00:09:11.398    "bdev_lvol_get_lvols",
00:09:11.398    "bdev_lvol_get_lvstores",
00:09:11.398    "bdev_lvol_delete",
00:09:11.398    "bdev_lvol_set_read_only",
00:09:11.398    "bdev_lvol_resize",
00:09:11.398    "bdev_lvol_decouple_parent",
00:09:11.398    "bdev_lvol_inflate",
00:09:11.398    "bdev_lvol_rename",
00:09:11.398    "bdev_lvol_clone_bdev",
00:09:11.398    "bdev_lvol_clone",
00:09:11.398    "bdev_lvol_snapshot",
00:09:11.398    "bdev_lvol_create",
00:09:11.398    "bdev_lvol_delete_lvstore",
00:09:11.398    "bdev_lvol_rename_lvstore",
00:09:11.398    "bdev_lvol_create_lvstore",
00:09:11.398    "bdev_passthru_delete",
00:09:11.398    "bdev_passthru_create",
00:09:11.398    "bdev_nvme_cuse_unregister",
00:09:11.398    "bdev_nvme_cuse_register",
00:09:11.398    "bdev_opal_new_user",
00:09:11.398    "bdev_opal_set_lock_state",
00:09:11.398    "bdev_opal_delete",
00:09:11.398    "bdev_opal_get_info",
00:09:11.398    "bdev_opal_create",
00:09:11.398    "bdev_nvme_opal_revert",
00:09:11.398    "bdev_nvme_opal_init",
00:09:11.398    "bdev_nvme_send_cmd",
00:09:11.398    "bdev_nvme_set_keys",
00:09:11.398    "bdev_nvme_get_path_iostat",
00:09:11.398    "bdev_nvme_get_mdns_discovery_info",
00:09:11.398    "bdev_nvme_stop_mdns_discovery",
00:09:11.398    "bdev_nvme_start_mdns_discovery",
00:09:11.398    "bdev_nvme_set_multipath_policy",
00:09:11.398    "bdev_nvme_set_preferred_path",
00:09:11.398    "bdev_nvme_get_io_paths",
00:09:11.398    "bdev_nvme_remove_error_injection",
00:09:11.398    "bdev_nvme_add_error_injection",
00:09:11.398    "bdev_nvme_get_discovery_info",
00:09:11.398    "bdev_nvme_stop_discovery",
00:09:11.398    "bdev_nvme_start_discovery",
00:09:11.398    "bdev_nvme_get_controller_health_info",
00:09:11.398    "bdev_nvme_disable_controller",
00:09:11.398    "bdev_nvme_enable_controller",
00:09:11.398    "bdev_nvme_reset_controller",
00:09:11.398    "bdev_nvme_get_transport_statistics",
00:09:11.398    "bdev_nvme_apply_firmware",
00:09:11.398    "bdev_nvme_detach_controller",
00:09:11.398    "bdev_nvme_get_controllers",
00:09:11.398    "bdev_nvme_attach_controller",
00:09:11.398    "bdev_nvme_set_hotplug",
00:09:11.399    "bdev_nvme_set_options",
00:09:11.399    "bdev_null_resize",
00:09:11.399    "bdev_null_delete",
00:09:11.399    "bdev_null_create",
00:09:11.399    "bdev_malloc_delete",
00:09:11.399    "bdev_malloc_create"
00:09:11.399  ]
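Note: the method list above is the reply to rpc_get_methods issued over TCP: tcp.sh@30 fronts the target's UNIX-domain RPC socket with a socat TCP listener, and rpc.py is pointed at 127.0.0.1:9998 with the retry and timeout options seen in the trace. Reproduced by hand:

  # Bridge the UNIX RPC socket to TCP, query it, then tear the bridge down.
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
  socat_pid=$!
  ./scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
  kill "$socat_pid"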
00:09:11.399   05:54:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:11.399   05:54:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:11.399   05:54:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 78925
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 78925 ']'
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 78925
00:09:11.399    05:54:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:11.399    05:54:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78925
00:09:11.399  killing process with pid 78925
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78925'
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 78925
00:09:11.399   05:54:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 78925
00:09:11.967  
00:09:11.967  real	0m1.228s
00:09:11.967  user	0m2.030s
00:09:11.967  sys	0m0.414s
00:09:11.967   05:54:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:11.967   05:54:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:11.967  ************************************
00:09:11.967  END TEST spdkcli_tcp
00:09:11.967  ************************************
00:09:11.967   05:54:32  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:11.967   05:54:32  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:11.967   05:54:32  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:11.967   05:54:32  -- common/autotest_common.sh@10 -- # set +x
00:09:11.967  ************************************
00:09:11.967  START TEST dpdk_mem_utility
00:09:11.967  ************************************
00:09:11.967   05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:11.967  * Looking for test storage...
00:09:11.967  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:09:11.967    05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:11.967     05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:11.967     05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:09:11.967    05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:11.967     05:54:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:11.967    05:54:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:09:11.967    05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:11.967    05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:11.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.967  		--rc genhtml_branch_coverage=1
00:09:11.967  		--rc genhtml_function_coverage=1
00:09:11.967  		--rc genhtml_legend=1
00:09:11.967  		--rc geninfo_all_blocks=1
00:09:11.967  		--rc geninfo_unexecuted_blocks=1
00:09:11.967  		
00:09:11.967  		'
00:09:11.967    05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:11.967  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.967  		--rc genhtml_branch_coverage=1
00:09:11.967  		--rc genhtml_function_coverage=1
00:09:11.967  		--rc genhtml_legend=1
00:09:11.967  		--rc geninfo_all_blocks=1
00:09:11.967  		--rc geninfo_unexecuted_blocks=1
00:09:11.967  		
00:09:11.967  		'
00:09:11.968    05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:11.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.968  		--rc genhtml_branch_coverage=1
00:09:11.968  		--rc genhtml_function_coverage=1
00:09:11.968  		--rc genhtml_legend=1
00:09:11.968  		--rc geninfo_all_blocks=1
00:09:11.968  		--rc geninfo_unexecuted_blocks=1
00:09:11.968  		
00:09:11.968  		'
00:09:11.968    05:54:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:11.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:11.968  		--rc genhtml_branch_coverage=1
00:09:11.968  		--rc genhtml_function_coverage=1
00:09:11.968  		--rc genhtml_legend=1
00:09:11.968  		--rc geninfo_all_blocks=1
00:09:11.968  		--rc geninfo_unexecuted_blocks=1
00:09:11.968  		
00:09:11.968  		'
00:09:11.968   05:54:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:11.968   05:54:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=79012
00:09:11.968   05:54:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:11.968   05:54:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 79012
00:09:11.968   05:54:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 79012 ']'
00:09:11.968   05:54:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:11.968   05:54:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:11.968   05:54:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:11.968  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:11.968   05:54:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:11.968   05:54:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:12.227  [2024-11-18 05:54:33.006540] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:12.227  [2024-11-18 05:54:33.006734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79012 ]
00:09:12.227  [2024-11-18 05:54:33.164431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:12.227  [2024-11-18 05:54:33.187539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:12.485   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:12.485   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:09:12.485   05:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:09:12.485   05:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:09:12.485   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:12.485   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:12.485  {
00:09:12.485  "filename": "/tmp/spdk_mem_dump.txt"
00:09:12.485  }
00:09:12.485   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:12.485   05:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:12.485  DPDK memory size 810.000000 MiB in 1 heap(s)
00:09:12.485  1 heaps totaling size 810.000000 MiB
00:09:12.485    size:  810.000000 MiB heap id: 0
00:09:12.485  end heaps----------
00:09:12.485  9 mempools totaling size 595.772034 MiB
00:09:12.485    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:09:12.486    size:  158.602051 MiB name: PDU_data_out_Pool
00:09:12.486    size:   92.545471 MiB name: bdev_io_79012
00:09:12.486    size:   50.003479 MiB name: msgpool_79012
00:09:12.486    size:   36.509338 MiB name: fsdev_io_79012
00:09:12.486    size:   21.763794 MiB name: PDU_Pool
00:09:12.486    size:   19.513306 MiB name: SCSI_TASK_Pool
00:09:12.486    size:    4.133484 MiB name: evtpool_79012
00:09:12.486    size:    0.026123 MiB name: Session_Pool
00:09:12.486  end mempools-------
00:09:12.486  6 memzones totaling size 4.142822 MiB
00:09:12.486    size:    1.000366 MiB name: RG_ring_0_79012
00:09:12.486    size:    1.000366 MiB name: RG_ring_1_79012
00:09:12.486    size:    1.000366 MiB name: RG_ring_4_79012
00:09:12.486    size:    1.000366 MiB name: RG_ring_5_79012
00:09:12.486    size:    0.125366 MiB name: RG_ring_2_79012
00:09:12.486    size:    0.015991 MiB name: RG_ring_3_79012
00:09:12.486  end memzones-------
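Note: the summary above is produced in two steps traced earlier: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt (the JSON reply names the file), and scripts/dpdk_mem_info.py parses that dump into the heap, mempool, and memzone totals; the "-m 0" run that follows details the elements of heap id 0. Against a running target:

  ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
  ./scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
  ./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0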
00:09:12.486   05:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:09:12.745  heap id: 0 total size: 810.000000 MiB number of busy elements: 313 number of free elements: 15
00:09:12.745    list of free elements. size: 10.813232 MiB
00:09:12.745      element at address: 0x200018a00000 with size:    0.999878 MiB
00:09:12.745      element at address: 0x200018c00000 with size:    0.999878 MiB
00:09:12.745      element at address: 0x200031800000 with size:    0.994446 MiB
00:09:12.745      element at address: 0x200000400000 with size:    0.993958 MiB
00:09:12.745      element at address: 0x200008000000 with size:    0.959839 MiB
00:09:12.745      element at address: 0x200012c00000 with size:    0.954285 MiB
00:09:12.745      element at address: 0x200018e00000 with size:    0.936584 MiB
00:09:12.745      element at address: 0x200000200000 with size:    0.717346 MiB
00:09:12.745      element at address: 0x20001a600000 with size:    0.567688 MiB
00:09:12.745      element at address: 0x200003e00000 with size:    0.489075 MiB
00:09:12.745      element at address: 0x200000c00000 with size:    0.487000 MiB
00:09:12.745      element at address: 0x200019000000 with size:    0.485657 MiB
00:09:12.745      element at address: 0x200010600000 with size:    0.480103 MiB
00:09:12.745      element at address: 0x200027a00000 with size:    0.395752 MiB
00:09:12.745      element at address: 0x200000800000 with size:    0.351746 MiB
00:09:12.745    list of standard malloc elements. size: 199.267883 MiB
00:09:12.745      element at address: 0x2000081fff80 with size:  132.000122 MiB
00:09:12.745      element at address: 0x200003ffff80 with size:   64.000122 MiB
00:09:12.745      element at address: 0x200018afff80 with size:    1.000122 MiB
00:09:12.745      element at address: 0x200018cfff80 with size:    1.000122 MiB
00:09:12.745      element at address: 0x200018efff80 with size:    1.000122 MiB
00:09:12.745      element at address: 0x2000003d9f00 with size:    0.140747 MiB
00:09:12.745      element at address: 0x200018eeff00 with size:    0.062622 MiB
00:09:12.745      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:09:12.745      element at address: 0x200018eefdc0 with size:    0.000305 MiB
00:09:12.745      element at address: 0x2000002d7c40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000003d9e40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fe740 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fe800 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fe8c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fe980 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fea40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004feb00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004febc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fec80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fed40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fee00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004feec0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004fef80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff040 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff100 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff1c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff280 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff340 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff400 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff4c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff580 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff640 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff700 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff7c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff880 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ff940 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ffa00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ffac0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ffcc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ffd80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000004ffe40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a0c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a2c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a380 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a440 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a500 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a5c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a680 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a740 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a800 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a8c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085a980 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085aa40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085ab00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085abc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085ac80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085ad40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085ae00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085aec0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085af80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085b040 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20000085b100 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000008db3c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000008db5c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000008df880 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000008ffb40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7cac0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7cb80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7cc40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7cd00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7cdc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7ce80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7cf40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d000 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d0c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d180 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d240 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d300 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d3c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d480 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d540 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d600 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d6c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d780 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d840 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d900 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7d9c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7da80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7db40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7dc00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7dcc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7dd80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7de40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7df00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7dfc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e080 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e140 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e200 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e2c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e380 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e440 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e500 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e5c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e680 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e740 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e800 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e8c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7e980 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7ea40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7eb00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7ebc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7ec80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000c7ed40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000cff000 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200000cff0c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d340 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d400 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d4c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d580 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d640 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d700 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d7c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d880 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7d940 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7da00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003e7dac0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200003efdd80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000080fdd80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067ae80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067af40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b000 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b0c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b180 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b240 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b300 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b3c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b480 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b540 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b600 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001067b6c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000106fb980 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200012cf44c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200018eefc40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x200018eefd00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x2000190bc740 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691540 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691600 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a6916c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691780 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691840 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691900 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a6919c0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691a80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691b40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691c00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691cc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691d80 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691e40 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691f00 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a691fc0 with size:    0.000183 MiB
00:09:12.746      element at address: 0x20001a692080 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692140 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692200 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6922c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692380 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692440 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692500 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6925c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692680 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692740 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692800 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6928c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692980 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692a40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692b00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692bc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692c80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692d40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692e00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692ec0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a692f80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693040 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693100 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6931c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693280 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693340 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693400 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6934c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693580 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693640 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693700 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6937c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693880 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693940 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693a00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693ac0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693b80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693c40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693d00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693dc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693e80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a693f40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694000 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6940c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694180 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694240 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694300 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6943c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694480 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694540 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694600 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6946c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694780 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694840 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694900 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6949c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694a80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694b40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694c00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694cc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694d80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694e40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694f00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a694fc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a695080 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a695140 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a695200 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a6952c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a695380 with size:    0.000183 MiB
00:09:12.747      element at address: 0x20001a695440 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a65500 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a655c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c1c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c3c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c480 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c540 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c600 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c6c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c780 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c840 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c900 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6c9c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ca80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6cb40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6cc00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ccc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6cd80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ce40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6cf00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6cfc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d080 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d140 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d200 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d2c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d380 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d440 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d500 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d5c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d680 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d740 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d800 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d8c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6d980 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6da40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6db00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6dbc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6dc80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6dd40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6de00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6dec0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6df80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e040 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e100 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e1c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e280 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e340 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e400 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e4c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e580 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e640 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e700 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e7c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e880 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6e940 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ea00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6eac0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6eb80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ec40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ed00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6edc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ee80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ef40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f000 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f0c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f180 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f240 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f300 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f3c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f480 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f540 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f600 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f6c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f780 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f840 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f900 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6f9c0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6fa80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6fb40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6fc00 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6fcc0 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6fd80 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6fe40 with size:    0.000183 MiB
00:09:12.747      element at address: 0x200027a6ff00 with size:    0.000183 MiB
00:09:12.747    list of memzone associated elements. size: 599.918884 MiB
00:09:12.748      element at address: 0x20001a695500 with size:  211.416748 MiB
00:09:12.748        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:12.748      element at address: 0x200027a6ffc0 with size:  157.562561 MiB
00:09:12.748        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:12.748      element at address: 0x200012df4780 with size:   92.045044 MiB
00:09:12.748        associated memzone info: size:   92.044922 MiB name: MP_bdev_io_79012_0
00:09:12.748      element at address: 0x200000dff380 with size:   48.003052 MiB
00:09:12.748        associated memzone info: size:   48.002930 MiB name: MP_msgpool_79012_0
00:09:12.748      element at address: 0x2000107fdb80 with size:   36.008911 MiB
00:09:12.748        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_79012_0
00:09:12.748      element at address: 0x2000191be940 with size:   20.255554 MiB
00:09:12.748        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:09:12.748      element at address: 0x2000319feb40 with size:   18.005066 MiB
00:09:12.748        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:12.748      element at address: 0x2000004fff00 with size:    3.000244 MiB
00:09:12.748        associated memzone info: size:    3.000122 MiB name: MP_evtpool_79012_0
00:09:12.748      element at address: 0x2000009ffe00 with size:    2.000488 MiB
00:09:12.748        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_79012
00:09:12.748      element at address: 0x2000002d7d00 with size:    1.008118 MiB
00:09:12.748        associated memzone info: size:    1.007996 MiB name: MP_evtpool_79012
00:09:12.748      element at address: 0x2000106fba40 with size:    1.008118 MiB
00:09:12.748        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:09:12.748      element at address: 0x2000190bc800 with size:    1.008118 MiB
00:09:12.748        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:12.748      element at address: 0x2000080fde40 with size:    1.008118 MiB
00:09:12.748        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:09:12.748      element at address: 0x200003efde40 with size:    1.008118 MiB
00:09:12.748        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:12.748      element at address: 0x200000cff180 with size:    1.000488 MiB
00:09:12.748        associated memzone info: size:    1.000366 MiB name: RG_ring_0_79012
00:09:12.748      element at address: 0x2000008ffc00 with size:    1.000488 MiB
00:09:12.748        associated memzone info: size:    1.000366 MiB name: RG_ring_1_79012
00:09:12.748      element at address: 0x200012cf4580 with size:    1.000488 MiB
00:09:12.748        associated memzone info: size:    1.000366 MiB name: RG_ring_4_79012
00:09:12.748      element at address: 0x2000318fe940 with size:    1.000488 MiB
00:09:12.748        associated memzone info: size:    1.000366 MiB name: RG_ring_5_79012
00:09:12.748      element at address: 0x20000085b1c0 with size:    0.500488 MiB
00:09:12.748        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_79012
00:09:12.748      element at address: 0x200000c7ee00 with size:    0.500488 MiB
00:09:12.748        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_79012
00:09:12.748      element at address: 0x20001067b780 with size:    0.500488 MiB
00:09:12.748        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:09:12.748      element at address: 0x200003e7db80 with size:    0.500488 MiB
00:09:12.748        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:12.748      element at address: 0x20001907c540 with size:    0.250488 MiB
00:09:12.748        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:12.748      element at address: 0x2000002b7a40 with size:    0.125488 MiB
00:09:12.748        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_79012
00:09:12.748      element at address: 0x2000008df940 with size:    0.125488 MiB
00:09:12.748        associated memzone info: size:    0.125366 MiB name: RG_ring_2_79012
00:09:12.748      element at address: 0x2000080f5b80 with size:    0.031738 MiB
00:09:12.748        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:12.748      element at address: 0x200027a65680 with size:    0.023743 MiB
00:09:12.748        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:09:12.748      element at address: 0x2000008db680 with size:    0.016113 MiB
00:09:12.748        associated memzone info: size:    0.015991 MiB name: RG_ring_3_79012
00:09:12.748      element at address: 0x200027a6b7c0 with size:    0.002441 MiB
00:09:12.748        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:09:12.748      element at address: 0x2000004ffb80 with size:    0.000305 MiB
00:09:12.748        associated memzone info: size:    0.000183 MiB name: MP_msgpool_79012
00:09:12.748      element at address: 0x2000008db480 with size:    0.000305 MiB
00:09:12.748        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_79012
00:09:12.748      element at address: 0x20000085a180 with size:    0.000305 MiB
00:09:12.748        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_79012
00:09:12.748      element at address: 0x200027a6c280 with size:    0.000305 MiB
00:09:12.748        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:09:12.748   05:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:12.748   05:54:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 79012
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 79012 ']'
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 79012
00:09:12.748    05:54:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:12.748    05:54:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79012
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:12.748  killing process with pid 79012
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79012'
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 79012
00:09:12.748   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 79012
00:09:13.007  
00:09:13.007  real	0m1.128s
00:09:13.007  user	0m1.140s
00:09:13.007  sys	0m0.415s
00:09:13.007   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:13.007   05:54:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:13.007  ************************************
00:09:13.007  END TEST dpdk_mem_utility
00:09:13.007  ************************************
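Condensed, the dpdk_mem_utility test that just finished reduces to the following pattern; this is a sketch assembled from the traced lines above, not the test script itself:

    ./build/bin/spdk_tgt &                            # assumed binary path
    spdkpid=$!
    trap 'kill $spdkpid' SIGINT SIGTERM EXIT          # mirrors test_dpdk_mem_info.sh@17
    ./scripts/rpc.py env_dpdk_get_mem_stats           # dump to /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                        # summary view
    ./scripts/dpdk_mem_info.py -m 0                   # per-element view of heap 0
    trap - SIGINT SIGTERM EXIT                        # as test_dpdk_mem_info.sh@25 does
    kill $spdkpid                                     # the harness uses killprocess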
00:09:13.007   05:54:33  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:13.007   05:54:33  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:13.007   05:54:33  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:13.007   05:54:33  -- common/autotest_common.sh@10 -- # set +x
00:09:13.007  ************************************
00:09:13.007  START TEST event
00:09:13.007  ************************************
00:09:13.007   05:54:33 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:13.007  * Looking for test storage...
00:09:13.266  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:13.266    05:54:33 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:13.266     05:54:33 event -- common/autotest_common.sh@1693 -- # lcov --version
00:09:13.266     05:54:33 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:13.266    05:54:34 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:13.266    05:54:34 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:13.266    05:54:34 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:13.266    05:54:34 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:13.266    05:54:34 event -- scripts/common.sh@336 -- # IFS=.-:
00:09:13.266    05:54:34 event -- scripts/common.sh@336 -- # read -ra ver1
00:09:13.266    05:54:34 event -- scripts/common.sh@337 -- # IFS=.-:
00:09:13.266    05:54:34 event -- scripts/common.sh@337 -- # read -ra ver2
00:09:13.266    05:54:34 event -- scripts/common.sh@338 -- # local 'op=<'
00:09:13.266    05:54:34 event -- scripts/common.sh@340 -- # ver1_l=2
00:09:13.266    05:54:34 event -- scripts/common.sh@341 -- # ver2_l=1
00:09:13.266    05:54:34 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:13.266    05:54:34 event -- scripts/common.sh@344 -- # case "$op" in
00:09:13.266    05:54:34 event -- scripts/common.sh@345 -- # : 1
00:09:13.266    05:54:34 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:13.266    05:54:34 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:13.266     05:54:34 event -- scripts/common.sh@365 -- # decimal 1
00:09:13.266     05:54:34 event -- scripts/common.sh@353 -- # local d=1
00:09:13.266     05:54:34 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:13.266     05:54:34 event -- scripts/common.sh@355 -- # echo 1
00:09:13.266    05:54:34 event -- scripts/common.sh@365 -- # ver1[v]=1
00:09:13.267     05:54:34 event -- scripts/common.sh@366 -- # decimal 2
00:09:13.267     05:54:34 event -- scripts/common.sh@353 -- # local d=2
00:09:13.267     05:54:34 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:13.267     05:54:34 event -- scripts/common.sh@355 -- # echo 2
00:09:13.267    05:54:34 event -- scripts/common.sh@366 -- # ver2[v]=2
00:09:13.267    05:54:34 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:13.267    05:54:34 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:13.267    05:54:34 event -- scripts/common.sh@368 -- # return 0
00:09:13.267    05:54:34 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:13.267    05:54:34 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:13.267  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.267  		--rc genhtml_branch_coverage=1
00:09:13.267  		--rc genhtml_function_coverage=1
00:09:13.267  		--rc genhtml_legend=1
00:09:13.267  		--rc geninfo_all_blocks=1
00:09:13.267  		--rc geninfo_unexecuted_blocks=1
00:09:13.267  		
00:09:13.267  		'
00:09:13.267    05:54:34 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:13.267  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.267  		--rc genhtml_branch_coverage=1
00:09:13.267  		--rc genhtml_function_coverage=1
00:09:13.267  		--rc genhtml_legend=1
00:09:13.267  		--rc geninfo_all_blocks=1
00:09:13.267  		--rc geninfo_unexecuted_blocks=1
00:09:13.267  		
00:09:13.267  		'
00:09:13.267    05:54:34 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:13.267  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.267  		--rc genhtml_branch_coverage=1
00:09:13.267  		--rc genhtml_function_coverage=1
00:09:13.267  		--rc genhtml_legend=1
00:09:13.267  		--rc geninfo_all_blocks=1
00:09:13.267  		--rc geninfo_unexecuted_blocks=1
00:09:13.267  		
00:09:13.267  		'
00:09:13.267    05:54:34 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:13.267  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:13.267  		--rc genhtml_branch_coverage=1
00:09:13.267  		--rc genhtml_function_coverage=1
00:09:13.267  		--rc genhtml_legend=1
00:09:13.267  		--rc geninfo_all_blocks=1
00:09:13.267  		--rc geninfo_unexecuted_blocks=1
00:09:13.267  		
00:09:13.267  		'
00:09:13.267   05:54:34 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:13.267    05:54:34 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:13.267   05:54:34 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:13.267   05:54:34 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:13.267   05:54:34 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:13.267   05:54:34 event -- common/autotest_common.sh@10 -- # set +x
00:09:13.267  ************************************
00:09:13.267  START TEST event_perf
00:09:13.267  ************************************
00:09:13.267   05:54:34 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:13.267  Running I/O for 1 second...[2024-11-18 05:54:34.130489] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:13.267  [2024-11-18 05:54:34.130868] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79085 ]
00:09:13.525  [2024-11-18 05:54:34.286261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:13.525  [2024-11-18 05:54:34.311425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:13.525  [2024-11-18 05:54:34.311587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.525  [2024-11-18 05:54:34.311634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:13.525  Running I/O for 1 second...[2024-11-18 05:54:34.311604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:14.461  
00:09:14.461  lcore  0:   181875
00:09:14.461  lcore  1:   181873
00:09:14.461  lcore  2:   181873
00:09:14.461  lcore  3:   181874
00:09:14.461  done.
00:09:14.461  ************************************
00:09:14.461  END TEST event_perf
00:09:14.461  ************************************
00:09:14.461  
00:09:14.461  real	0m1.264s
00:09:14.461  user	0m4.066s
00:09:14.461  sys	0m0.090s
00:09:14.461   05:54:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.461   05:54:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x
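event_perf above was driven with -m 0xF -t 1: a four-core reactor mask (hence the four "Reactor started" notices) and a one-second run, after which every lcore reports how many events it processed (~182k each here). Reproducing it by hand with the same flags:

    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # -m reactor core mask, -t run time in seconds (values as used by the harness above)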
00:09:14.461   05:54:35 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:14.461   05:54:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:14.461   05:54:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.461   05:54:35 event -- common/autotest_common.sh@10 -- # set +x
00:09:14.461  ************************************
00:09:14.461  START TEST event_reactor
00:09:14.461  ************************************
00:09:14.461   05:54:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:14.720  [2024-11-18 05:54:35.439853] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:14.720  [2024-11-18 05:54:35.440022] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79119 ]
00:09:14.720  [2024-11-18 05:54:35.590220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:14.720  [2024-11-18 05:54:35.611988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:16.106  test_start
00:09:16.106  oneshot
00:09:16.106  tick 100
00:09:16.106  tick 100
00:09:16.106  tick 250
00:09:16.106  tick 100
00:09:16.106  tick 100
00:09:16.106  tick 100
00:09:16.106  tick 250
00:09:16.106  tick 500
00:09:16.106  tick 100
00:09:16.106  tick 100
00:09:16.106  tick 250
00:09:16.106  tick 100
00:09:16.106  tick 100
00:09:16.106  test_end
00:09:16.106  
00:09:16.106  real	0m1.249s
00:09:16.106  user	0m1.094s
00:09:16.106  sys	0m0.054s
00:09:16.106   05:54:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:16.106  ************************************
00:09:16.106  END TEST event_reactor
00:09:16.106  ************************************
00:09:16.106   05:54:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
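The reactor test runs a single reactor (-c 0x1 in its EAL parameters) for one second; the oneshot/tick lines above are the scheduled events it logs as they fire. Same invocation the harness used:

    /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1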
00:09:16.106   05:54:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:16.106   05:54:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:16.106   05:54:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:16.106   05:54:36 event -- common/autotest_common.sh@10 -- # set +x
00:09:16.106  ************************************
00:09:16.106  START TEST event_reactor_perf
00:09:16.106  ************************************
00:09:16.106   05:54:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:16.106  [2024-11-18 05:54:36.748419] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:16.106  [2024-11-18 05:54:36.748584] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79161 ]
00:09:16.106  [2024-11-18 05:54:36.893362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:16.106  [2024-11-18 05:54:36.914140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.049  test_start
00:09:17.049  test_end
00:09:17.049  Performance:   306586 events per second
00:09:17.049  
00:09:17.049  real	0m1.243s
00:09:17.049  user	0m1.085s
00:09:17.049  sys	0m0.057s
00:09:17.049   05:54:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:17.049   05:54:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:09:17.049  ************************************
00:09:17.049  END TEST event_reactor_perf
00:09:17.049  ************************************
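reactor_perf measures raw event throughput on one reactor between test_start and test_end; this run sustained roughly 307k events per second. Manual invocation, same flag as above:

    /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1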
00:09:17.049    05:54:38 event -- event/event.sh@49 -- # uname -s
00:09:17.049   05:54:38 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:09:17.049   05:54:38 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:17.049   05:54:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:17.049   05:54:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:17.049   05:54:38 event -- common/autotest_common.sh@10 -- # set +x
00:09:17.049  ************************************
00:09:17.049  START TEST event_scheduler
00:09:17.049  ************************************
00:09:17.049   05:54:38 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:17.308  * Looking for test storage...
00:09:17.308  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:09:17.308    05:54:38 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:17.308     05:54:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:09:17.308     05:54:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:17.308    05:54:38 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:17.308    05:54:38 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:17.308    05:54:38 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:17.308    05:54:38 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:17.308    05:54:38 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:09:17.308    05:54:38 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:09:17.308    05:54:38 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:17.309     05:54:38 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:17.309    05:54:38 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:09:17.309    05:54:38 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:17.309    05:54:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:17.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.309  		--rc genhtml_branch_coverage=1
00:09:17.309  		--rc genhtml_function_coverage=1
00:09:17.309  		--rc genhtml_legend=1
00:09:17.309  		--rc geninfo_all_blocks=1
00:09:17.309  		--rc geninfo_unexecuted_blocks=1
00:09:17.309  		
00:09:17.309  		'
00:09:17.309    05:54:38 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:17.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.309  		--rc genhtml_branch_coverage=1
00:09:17.309  		--rc genhtml_function_coverage=1
00:09:17.309  		--rc genhtml_legend=1
00:09:17.309  		--rc geninfo_all_blocks=1
00:09:17.309  		--rc geninfo_unexecuted_blocks=1
00:09:17.309  		
00:09:17.309  		'
00:09:17.309    05:54:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:17.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.309  		--rc genhtml_branch_coverage=1
00:09:17.309  		--rc genhtml_function_coverage=1
00:09:17.309  		--rc genhtml_legend=1
00:09:17.309  		--rc geninfo_all_blocks=1
00:09:17.309  		--rc geninfo_unexecuted_blocks=1
00:09:17.309  		
00:09:17.309  		'
00:09:17.309    05:54:38 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:17.309  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:17.309  		--rc genhtml_branch_coverage=1
00:09:17.309  		--rc genhtml_function_coverage=1
00:09:17.309  		--rc genhtml_legend=1
00:09:17.309  		--rc geninfo_all_blocks=1
00:09:17.309  		--rc geninfo_unexecuted_blocks=1
00:09:17.309  		
00:09:17.309  		'
00:09:17.309   05:54:38 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:09:17.309   05:54:38 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=79226
00:09:17.309   05:54:38 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:09:17.309   05:54:38 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:09:17.309   05:54:38 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 79226
00:09:17.309   05:54:38 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 79226 ']'
00:09:17.309   05:54:38 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:17.309   05:54:38 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:17.309  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:17.309   05:54:38 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:17.309   05:54:38 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:17.309   05:54:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:17.309  [2024-11-18 05:54:38.277607] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:17.309  [2024-11-18 05:54:38.277828] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79226 ]
00:09:17.568  [2024-11-18 05:54:38.440686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:17.568  [2024-11-18 05:54:38.471639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:17.568  [2024-11-18 05:54:38.471716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:17.568  [2024-11-18 05:54:38.471641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:17.568  [2024-11-18 05:54:38.471821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
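The scheduler app was launched with -m 0xF -p 0x2 --wait-for-rpc -f: four reactor cores with lcore 2 as the main core (note --main-lcore=2 in the EAL parameters above), and framework initialization held back until an explicit framework_start_init RPC, which is exactly what this test issues later. Launch shape for reference (-f left uninterpreted here):

    /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &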
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:09:18.504   05:54:39 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:18.504  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:18.504  POWER: Cannot set governor of lcore 0 to userspace
00:09:18.504  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:18.504  POWER: Cannot set governor of lcore 0 to performance
00:09:18.504  POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:09:18.504  POWER: Cannot set governor of lcore 0 to userspace
00:09:18.504  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:09:18.504  POWER: Unable to set Power Management Environment for lcore 0
00:09:18.504  [2024-11-18 05:54:39.274058] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0
00:09:18.504  [2024-11-18 05:54:39.274089] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0
00:09:18.504  [2024-11-18 05:54:39.274115] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:09:18.504  [2024-11-18 05:54:39.274510] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:09:18.504  [2024-11-18 05:54:39.274527] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:09:18.504  [2024-11-18 05:54:39.274545] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
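The POWER/GUEST_CHANNEL errors above are the dynamic scheduler probing cpufreq: inside this VM the scaling_governor sysfs nodes cannot be opened, so the DPDK governor fails to initialize and the scheduler falls back to its built-in defaults (load limit 20, core limit 80, core busy 95, per the notices). The RPC being exercised can be issued by hand as:

    ./scripts/rpc.py framework_set_scheduler dynamic    # same call rpc_cmd makes above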
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.504   05:54:39 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:18.504  [2024-11-18 05:54:39.331079] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
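The scheduler_create_thread subtest that follows exercises a test-only RPC plugin: each call creates a pinned thread with a name, a cpumask, and what appears to be an active-busy percentage (100 for the active_pinned threads, 0 for the idle_pinned ones). The shape of one such call, arguments verbatim from the trace below (the harness arranges for the plugin module to be importable):

    ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m 0x1 -a 100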
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.504   05:54:39 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:18.504   05:54:39 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:18.504  ************************************
00:09:18.504  START TEST scheduler_create_thread
00:09:18.505  ************************************
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  2
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  3
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  4
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  5
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  6
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  7
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  8
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  9
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505  10
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505    05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:09:18.505    05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505    05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505    05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:18.505   05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:18.505    05:54:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:09:18.505    05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:18.505    05:54:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:20.408    05:54:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:20.408   05:54:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:09:20.408   05:54:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:09:20.408   05:54:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:20.408   05:54:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:21.345  ************************************
00:09:21.345  END TEST scheduler_create_thread
00:09:21.345  ************************************
00:09:21.345   05:54:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:21.345  
00:09:21.345  real	0m2.616s
00:09:21.345  user	0m0.018s
00:09:21.345  sys	0m0.004s
00:09:21.345   05:54:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:21.345   05:54:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
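
Condensed, the scheduler_create_thread test above is a short sequence of plugin RPCs: a fully active and a fully idle thread pinned to each of cores 0-3, an unpinned one-third-active thread, a half_active thread whose load is then raised to 50, and a throwaway thread that is deleted again. A sketch of the same flow (the trace creates all active threads before the idle ones; the loop here is just a compaction, and thread IDs are captured from the RPC output rather than hard-coded):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
# Pinned threads: one fully active and one idle per core mask 0x1..0x8.
for mask in 0x1 0x2 0x4 0x8; do
    $RPC scheduler_thread_create -n active_pinned -m "$mask" -a 100
    $RPC scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
# Unpinned threads exercising intermediate loads.
$RPC scheduler_thread_create -n one_third_active -a 30
thread_id=$($RPC scheduler_thread_create -n half_active -a 0)
$RPC scheduler_thread_set_active "$thread_id" 50
# Create and immediately delete a thread to exercise teardown.
thread_id=$($RPC scheduler_thread_create -n deleted -a 100)
$RPC scheduler_thread_delete "$thread_id"
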
00:09:21.345   05:54:42 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:09:21.345   05:54:42 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 79226
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 79226 ']'
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 79226
00:09:21.345    05:54:42 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:21.345    05:54:42 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79226
00:09:21.345  killing process with pid 79226
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79226'
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 79226
00:09:21.345   05:54:42 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 79226
00:09:21.603  [2024-11-18 05:54:42.439527] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
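
The teardown above goes through autotest's killprocess helper: probe that the PID is still alive with kill -0, resolve its command name via ps (reactor_2 in this run), then kill and wait. Roughly, with the sudo special case omitted:

killprocess() {
    local pid=$1
    # Liveness probe; signal 0 sends nothing but reports whether the
    # PID exists.
    kill -0 "$pid" || return 0
    # The command name is used for logging (and for a sudo check
    # skipped in this sketch).
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    echo "killing process with pid $pid"
    kill "$pid"
    # The app was started by this shell, so wait reaps it directly.
    wait "$pid"
}
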
00:09:21.863  
00:09:21.863  real	0m4.612s
00:09:21.863  user	0m8.845s
00:09:21.863  sys	0m0.430s
00:09:21.863   05:54:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:21.863   05:54:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:21.863  ************************************
00:09:21.863  END TEST event_scheduler
00:09:21.863  ************************************
00:09:21.863   05:54:42 event -- event/event.sh@51 -- # modprobe -n nbd
00:09:21.863   05:54:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:09:21.863   05:54:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:21.863   05:54:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:21.863   05:54:42 event -- common/autotest_common.sh@10 -- # set +x
00:09:21.863  ************************************
00:09:21.863  START TEST app_repeat
00:09:21.863  ************************************
00:09:21.863   05:54:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=79321
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 79321'
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:09:21.863  Process app_repeat pid: 79321
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:21.863  spdk_app_start Round 0
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:09:21.863   05:54:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 79321 /var/tmp/spdk-nbd.sock
00:09:21.863   05:54:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 79321 ']'
00:09:21.863   05:54:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:21.863   05:54:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:21.863  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:21.863   05:54:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:21.863   05:54:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:21.863   05:54:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:21.863  [2024-11-18 05:54:42.742484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:21.863  [2024-11-18 05:54:42.742690] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79321 ]
00:09:22.121  [2024-11-18 05:54:42.899828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:22.121  [2024-11-18 05:54:42.924289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.121  [2024-11-18 05:54:42.924360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
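
app_repeat is launched in the background against a dedicated nbd RPC socket, and the script blocks until that socket answers RPCs (waitforlisten with max_retries=100 above). A sketch of the launch-and-wait pattern; the polling loop here is illustrative, the real helper lives in autotest_common.sh:

rpc_server=/var/tmp/spdk-nbd.sock
# Two cores (mask 0x3), repeat interval 4 seconds, as in the trace.
/home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat \
    -r "$rpc_server" -m 0x3 -t 4 &
repeat_pid=$!
trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
# Poll until the UNIX socket accepts a trivial RPC.
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
        rpc_get_methods &> /dev/null && break
    sleep 0.1
done
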
00:09:22.121   05:54:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:22.121   05:54:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:22.121   05:54:42 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:22.380  Malloc0
00:09:22.380   05:54:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:22.638  Malloc1
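
Each round provisions two 64 MiB malloc bdevs with a 4096-byte block size and exports them as kernel nbd devices, all over the same socket. The RPCs as issued above:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# bdev_malloc_create <size_MiB> <block_size>; the RPC prints the
# generated bdev name (Malloc0, then Malloc1).
$rpc -s "$sock" bdev_malloc_create 64 4096
$rpc -s "$sock" bdev_malloc_create 64 4096
# Attach each bdev to a kernel nbd node.
$rpc -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
$rpc -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
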
00:09:22.638   05:54:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:22.638   05:54:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:22.897  /dev/nbd0
00:09:22.897    05:54:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:22.897   05:54:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:22.897  1+0 records in
00:09:22.897  1+0 records out
00:09:22.897  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002064 s, 19.8 MB/s
00:09:22.897    05:54:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:22.897   05:54:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
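
waitfornbd, expanded line by line above, gates on two conditions: the device name appears in /proc/partitions, and a single 4 KiB O_DIRECT read from it yields a non-empty scratch file. A compact rendering of that logic (retry counts from the trace; the sleep between tries is an assumption):

waitfornbd() {
    local nbd_name=$1 i
    local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdtest
    # Wait for the kernel to list the device (up to 20 tries).
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    grep -q -w "$nbd_name" /proc/partitions || return 1
    # The device must also be readable: one 4 KiB O_DIRECT read has to
    # land and produce a non-empty file (the trace checks stat -c %s).
    for ((i = 1; i <= 20; i++)); do
        if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct &&
           [[ $(stat -c %s "$tmp") != 0 ]]; then
            rm -f "$tmp"
            return 0
        fi
        sleep 0.1
    done
    return 1
}
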
00:09:22.897   05:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:22.897   05:54:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:22.897   05:54:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:23.156  /dev/nbd1
00:09:23.156    05:54:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:23.156   05:54:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:23.156  1+0 records in
00:09:23.156  1+0 records out
00:09:23.156  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233404 s, 17.5 MB/s
00:09:23.156    05:54:44 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:23.156   05:54:44 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:23.156   05:54:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:23.156   05:54:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:23.156    05:54:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:23.156    05:54:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:23.156     05:54:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:23.415    05:54:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:23.415    {
00:09:23.415      "nbd_device": "/dev/nbd0",
00:09:23.415      "bdev_name": "Malloc0"
00:09:23.415    },
00:09:23.415    {
00:09:23.415      "nbd_device": "/dev/nbd1",
00:09:23.415      "bdev_name": "Malloc1"
00:09:23.415    }
00:09:23.415  ]'
00:09:23.415     05:54:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:23.415    {
00:09:23.415      "nbd_device": "/dev/nbd0",
00:09:23.415      "bdev_name": "Malloc0"
00:09:23.415    },
00:09:23.415    {
00:09:23.415      "nbd_device": "/dev/nbd1",
00:09:23.415      "bdev_name": "Malloc1"
00:09:23.415    }
00:09:23.415  ]'
00:09:23.415     05:54:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:23.415    05:54:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:23.415  /dev/nbd1'
00:09:23.415     05:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:23.415     05:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:23.415  /dev/nbd1'
00:09:23.415    05:54:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:23.415    05:54:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:23.415   05:54:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:23.415   05:54:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
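
The attached-device count is derived entirely from RPC output: nbd_get_disks returns a JSON array, jq extracts the nbd_device paths, and grep -c counts them; here the result must be exactly 2. Roughly:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
# JSON array of {nbd_device, bdev_name} objects -> list of device paths.
nbd_disks_name=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device')
# grep -c prints the match count but exits nonzero on zero matches; the
# suite tolerates that (note the bare 'true' step in the empty-list
# check after teardown).
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
[ "$count" -eq 2 ] || echo "expected 2 nbd devices, got $count"
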
00:09:23.415   05:54:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:23.675  256+0 records in
00:09:23.675  256+0 records out
00:09:23.675  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00770414 s, 136 MB/s
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:23.675  256+0 records in
00:09:23.675  256+0 records out
00:09:23.675  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270307 s, 38.8 MB/s
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:23.675  256+0 records in
00:09:23.675  256+0 records out
00:09:23.675  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0330193 s, 31.8 MB/s
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
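
The data pass writes one random megabyte through each device with O_DIRECT and then compares it back byte-for-byte against the source file. The same flow, condensed, with the paths from the trace:

tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
# 1 MiB of random data: 256 blocks of 4096 bytes.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for nbd in /dev/nbd0 /dev/nbd1; do
    # Write the pattern through the nbd device, bypassing the page cache.
    dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
done
for nbd in /dev/nbd0 /dev/nbd1; do
    # Byte-wise compare of the first 1M; cmp exits nonzero on mismatch.
    cmp -b -n 1M "$tmp_file" "$nbd"
done
rm "$tmp_file"
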
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:23.675   05:54:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:23.935    05:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:23.935   05:54:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:24.194    05:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:24.194   05:54:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:24.194   05:54:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:24.194   05:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:24.194   05:54:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:24.194   05:54:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:24.194   05:54:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:24.194   05:54:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:24.194    05:54:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:24.194    05:54:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:24.194     05:54:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:24.452    05:54:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:24.452     05:54:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:24.452     05:54:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:24.452    05:54:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:24.452     05:54:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:24.452     05:54:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:24.452     05:54:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:24.452    05:54:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:24.452    05:54:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:24.452   05:54:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:24.452   05:54:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:24.452   05:54:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
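
Teardown mirrors setup: nbd_stop_disk detaches each device, waitfornbd_exit polls /proc/partitions until the entry disappears, and a final nbd_get_disks must yield an empty list (count 0 above). In outline, with the same illustrative polling as earlier:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-nbd.sock
for nbd in /dev/nbd0 /dev/nbd1; do
    $rpc -s "$sock" nbd_stop_disk "$nbd"
    # Wait (up to 20 tries) for the kernel to drop the partition entry.
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$(basename "$nbd")" /proc/partitions || break
        sleep 0.1
    done
done
# The disk list must now be empty.
count=$($rpc -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device' |
    grep -c /dev/nbd || true)
[ "$count" -eq 0 ] || echo "nbd devices still attached: $count"
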
00:09:24.452   05:54:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:24.710   05:54:45 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:24.969  [2024-11-18 05:54:45.699946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:24.969  [2024-11-18 05:54:45.722857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:24.969  [2024-11-18 05:54:45.722865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.969  [2024-11-18 05:54:45.757361] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:24.969  [2024-11-18 05:54:45.757511] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:28.256   05:54:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:28.256  spdk_app_start Round 1
00:09:28.256   05:54:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:09:28.256   05:54:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 79321 /var/tmp/spdk-nbd.sock
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 79321 ']'
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:28.256  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:28.256   05:54:48 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:28.256   05:54:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:28.256  Malloc0
00:09:28.256   05:54:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:28.514  Malloc1
00:09:28.514   05:54:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:28.514   05:54:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:28.772  /dev/nbd0
00:09:28.772    05:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:28.772   05:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:28.772  1+0 records in
00:09:28.772  1+0 records out
00:09:28.772  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195123 s, 21.0 MB/s
00:09:28.772    05:54:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:28.772   05:54:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:28.772   05:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:28.772   05:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:28.772   05:54:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:29.033  /dev/nbd1
00:09:29.033    05:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:29.033   05:54:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:29.033  1+0 records in
00:09:29.033  1+0 records out
00:09:29.033  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218027 s, 18.8 MB/s
00:09:29.033    05:54:49 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:29.033   05:54:49 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:29.033   05:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:29.033   05:54:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:29.033    05:54:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:29.033    05:54:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:29.033     05:54:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:29.292    05:54:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:29.292    {
00:09:29.292      "nbd_device": "/dev/nbd0",
00:09:29.292      "bdev_name": "Malloc0"
00:09:29.292    },
00:09:29.292    {
00:09:29.292      "nbd_device": "/dev/nbd1",
00:09:29.292      "bdev_name": "Malloc1"
00:09:29.292    }
00:09:29.292  ]'
00:09:29.292     05:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:29.292    {
00:09:29.292      "nbd_device": "/dev/nbd0",
00:09:29.292      "bdev_name": "Malloc0"
00:09:29.292    },
00:09:29.292    {
00:09:29.292      "nbd_device": "/dev/nbd1",
00:09:29.292      "bdev_name": "Malloc1"
00:09:29.292    }
00:09:29.292  ]'
00:09:29.292     05:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:29.292    05:54:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:29.292  /dev/nbd1'
00:09:29.292     05:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:29.292  /dev/nbd1'
00:09:29.292     05:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:29.292    05:54:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:29.292    05:54:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:29.292  256+0 records in
00:09:29.292  256+0 records out
00:09:29.292  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101051 s, 104 MB/s
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:29.292   05:54:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:29.551  256+0 records in
00:09:29.551  256+0 records out
00:09:29.551  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0281323 s, 37.3 MB/s
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:29.551  256+0 records in
00:09:29.551  256+0 records out
00:09:29.551  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303871 s, 34.5 MB/s
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:29.551   05:54:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:29.810    05:54:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:29.810   05:54:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:30.071    05:54:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:30.071   05:54:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:30.071   05:54:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:30.071   05:54:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:30.071   05:54:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:30.071   05:54:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:30.071   05:54:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:30.071   05:54:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:30.071    05:54:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:30.071    05:54:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:30.071     05:54:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:30.349    05:54:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:30.349     05:54:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:30.349     05:54:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:30.349    05:54:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:30.349     05:54:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:30.349     05:54:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:30.349     05:54:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:30.349    05:54:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:30.349    05:54:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:30.349   05:54:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:30.349   05:54:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:30.349   05:54:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:30.349   05:54:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:30.619   05:54:51 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:30.619  [2024-11-18 05:54:51.567081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:30.619  [2024-11-18 05:54:51.588363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.619  [2024-11-18 05:54:51.588369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:30.878  [2024-11-18 05:54:51.621385] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:30.878  [2024-11-18 05:54:51.621519] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:34.167   05:54:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:34.167  spdk_app_start Round 2
00:09:34.167   05:54:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:09:34.167   05:54:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 79321 /var/tmp/spdk-nbd.sock
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 79321 ']'
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:34.167  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:34.167   05:54:54 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:34.167   05:54:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:34.167  Malloc0
00:09:34.167   05:54:54 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:34.426  Malloc1
00:09:34.426   05:54:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:34.426   05:54:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:34.686  /dev/nbd0
00:09:34.686    05:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:34.686   05:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:34.686  1+0 records in
00:09:34.686  1+0 records out
00:09:34.686  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024713 s, 16.6 MB/s
00:09:34.686    05:54:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:34.686   05:54:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:34.686   05:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:34.686   05:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:34.686   05:54:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:34.945  /dev/nbd1
00:09:34.945    05:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:34.945   05:54:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:34.945  1+0 records in
00:09:34.945  1+0 records out
00:09:34.945  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193942 s, 21.1 MB/s
00:09:34.945    05:54:55 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:34.945   05:54:55 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:34.945   05:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:34.945   05:54:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:34.945    05:54:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:34.945    05:54:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:34.945     05:54:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:35.203    05:54:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:35.203    {
00:09:35.203      "nbd_device": "/dev/nbd0",
00:09:35.203      "bdev_name": "Malloc0"
00:09:35.203    },
00:09:35.203    {
00:09:35.203      "nbd_device": "/dev/nbd1",
00:09:35.203      "bdev_name": "Malloc1"
00:09:35.203    }
00:09:35.203  ]'
00:09:35.203     05:54:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:35.203    {
00:09:35.203      "nbd_device": "/dev/nbd0",
00:09:35.203      "bdev_name": "Malloc0"
00:09:35.203    },
00:09:35.203    {
00:09:35.203      "nbd_device": "/dev/nbd1",
00:09:35.203      "bdev_name": "Malloc1"
00:09:35.203    }
00:09:35.203  ]'
00:09:35.203     05:54:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:35.203    05:54:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:35.203  /dev/nbd1'
00:09:35.203     05:54:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:35.203  /dev/nbd1'
00:09:35.203     05:54:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:35.203    05:54:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:35.203    05:54:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:35.203  256+0 records in
00:09:35.203  256+0 records out
00:09:35.203  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00645859 s, 162 MB/s
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:35.203  256+0 records in
00:09:35.203  256+0 records out
00:09:35.203  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251943 s, 41.6 MB/s
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:35.203  256+0 records in
00:09:35.203  256+0 records out
00:09:35.203  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0333279 s, 31.5 MB/s
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
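What just ran is nbd_dd_data_verify in both of its modes: write fills a 1 MiB temp file from /dev/urandom and dd's it onto each NBD device with O_DIRECT; verify then cmp's the first 1 MiB of each device against the same file. A condensed sketch of the pattern (device list and dd/cmp arguments follow the trace; the temp-file location here is an assumption, the run above uses test/event/nbdrandtest):

    tmp_file=$(mktemp)               # assumed path; the log writes under test/event/
    nbd_list=(/dev/nbd0 /dev/nbd1)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"   # exits non-zero on the first mismatch
    done
    rm "$tmp_file"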
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:35.203   05:54:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:35.770    05:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:35.770    05:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:35.770   05:54:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:36.030   05:54:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
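waitfornbd_exit, traced twice above, polls /proc/partitions until the kernel drops the device entry after nbd_stop_disk. A sketch of that loop; the 20-iteration bound matches the trace, while the sleep interval between polls is an assumption:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1           # still present, wait and re-check (interval assumed)
            else
                break               # entry gone, device fully detached
            fi
        done
        return 0
    }
    waitfornbd_exit nbd0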
00:09:36.030    05:54:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:36.030    05:54:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:36.030     05:54:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:36.030    05:54:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:36.030     05:54:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:36.030     05:54:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:36.289    05:54:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:36.289     05:54:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:36.289     05:54:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:36.289     05:54:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:36.289    05:54:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:36.289    05:54:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:36.289   05:54:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:36.289   05:54:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:36.289   05:54:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:36.289   05:54:57 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:36.548   05:54:57 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:36.548  [2024-11-18 05:54:57.410850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:36.548  [2024-11-18 05:54:57.431679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:36.548  [2024-11-18 05:54:57.431682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.548  [2024-11-18 05:54:57.463279] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:36.548  [2024-11-18 05:54:57.463366] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:39.833  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:39.833   05:55:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 79321 /var/tmp/spdk-nbd.sock
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 79321 ']'
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:39.834   05:55:00 event.app_repeat -- event/event.sh@39 -- # killprocess 79321
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 79321 ']'
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 79321
00:09:39.834    05:55:00 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:39.834    05:55:00 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79321
00:09:39.834  killing process with pid 79321
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79321'
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@973 -- # kill 79321
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@978 -- # wait 79321
00:09:39.834  spdk_app_start is called in Round 0.
00:09:39.834  Shutdown signal received, stop current app iteration
00:09:39.834  Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization...
00:09:39.834  spdk_app_start is called in Round 1.
00:09:39.834  Shutdown signal received, stop current app iteration
00:09:39.834  Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization...
00:09:39.834  spdk_app_start is called in Round 2.
00:09:39.834  Shutdown signal received, stop current app iteration
00:09:39.834  Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 reinitialization...
00:09:39.834  spdk_app_start is called in Round 3.
00:09:39.834  Shutdown signal received, stop current app iteration
00:09:39.834   05:55:00 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:09:39.834   05:55:00 event.app_repeat -- event/event.sh@42 -- # return 0
00:09:39.834  
00:09:39.834  real	0m18.046s
00:09:39.834  user	0m41.234s
00:09:39.834  sys	0m2.613s
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:39.834   05:55:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:39.834  ************************************
00:09:39.834  END TEST app_repeat
00:09:39.834  ************************************
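app_repeat drove four rounds of the same lifecycle: the test sends SIGTERM through the spdk_kill_instance RPC, the app tears down and reinitializes, and the harness waits for the socket to come back. One iteration of that driver, sketched from the trace (socket and rpc.py path are taken from this log; waitforlisten is the autotest_common.sh helper seen above):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-nbd.sock
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM   # event.sh@34 above
    sleep 3                                        # event.sh@35 above
    # waitforlisten <pid> "$sock" then blocks until the restarted app listens again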
00:09:39.834   05:55:00 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:09:39.834   05:55:00 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:09:39.834   05:55:00 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:39.834   05:55:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:39.834   05:55:00 event -- common/autotest_common.sh@10 -- # set +x
00:09:39.834  ************************************
00:09:39.834  START TEST cpu_locks
00:09:39.834  ************************************
00:09:39.834   05:55:00 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:09:40.093  * Looking for test storage...
00:09:40.093  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:40.093    05:55:00 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:40.093     05:55:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:40.093     05:55:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:09:40.093    05:55:00 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:09:40.093    05:55:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:40.094    05:55:00 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:09:40.094    05:55:00 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:40.094     05:55:00 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:09:40.094    05:55:00 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:09:40.094    05:55:00 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:40.094    05:55:00 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:40.094    05:55:00 event.cpu_locks -- scripts/common.sh@368 -- # return 0
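The long run of scripts/common.sh lines above is cmp_versions deciding that lcov 1.15 is older than 2: both version strings are split on the characters . - and :, short versions are padded, and the first unequal component pair decides. A simplified numeric-only sketch of the "less than" path exercised here (the real helper also routes components through a decimal sanitizer):

    lt_version() {
        local IFS=.-:                 # same separators as scripts/common.sh@336
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}   # pad the shorter version with 0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1                      # equal versions are not "less than"
    }
    lt_version 1.15 2 && echo "1.15 < 2"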
00:09:40.094    05:55:00 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:40.094    05:55:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:40.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:40.094  		--rc genhtml_branch_coverage=1
00:09:40.094  		--rc genhtml_function_coverage=1
00:09:40.094  		--rc genhtml_legend=1
00:09:40.094  		--rc geninfo_all_blocks=1
00:09:40.094  		--rc geninfo_unexecuted_blocks=1
00:09:40.094  		
00:09:40.094  		'
00:09:40.094    05:55:00 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:40.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:40.094  		--rc genhtml_branch_coverage=1
00:09:40.094  		--rc genhtml_function_coverage=1
00:09:40.094  		--rc genhtml_legend=1
00:09:40.094  		--rc geninfo_all_blocks=1
00:09:40.094  		--rc geninfo_unexecuted_blocks=1
00:09:40.094  		
00:09:40.094  		'
00:09:40.094    05:55:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:40.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:40.094  		--rc genhtml_branch_coverage=1
00:09:40.094  		--rc genhtml_function_coverage=1
00:09:40.094  		--rc genhtml_legend=1
00:09:40.094  		--rc geninfo_all_blocks=1
00:09:40.094  		--rc geninfo_unexecuted_blocks=1
00:09:40.094  		
00:09:40.094  		'
00:09:40.094    05:55:00 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:40.094  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:40.094  		--rc genhtml_branch_coverage=1
00:09:40.094  		--rc genhtml_function_coverage=1
00:09:40.094  		--rc genhtml_legend=1
00:09:40.094  		--rc geninfo_all_blocks=1
00:09:40.094  		--rc geninfo_unexecuted_blocks=1
00:09:40.094  		
00:09:40.094  		'
00:09:40.094   05:55:00 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:09:40.094   05:55:00 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:09:40.094   05:55:00 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:09:40.094   05:55:00 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:09:40.094   05:55:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:40.094   05:55:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:40.094   05:55:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:40.094  ************************************
00:09:40.094  START TEST default_locks
00:09:40.094  ************************************
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=79799
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 79799
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 79799 ']'
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:40.094  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:40.094   05:55:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:40.094  [2024-11-18 05:55:01.043933] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:40.094  [2024-11-18 05:55:01.044127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79799 ]
00:09:40.353  [2024-11-18 05:55:01.198358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:40.353  [2024-11-18 05:55:01.219522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:40.921   05:55:01 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:40.921   05:55:01 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:09:40.921   05:55:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 79799
00:09:40.921   05:55:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 79799
00:09:41.180   05:55:01 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
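locks_exist is the assertion at the heart of these tests: a target started without --disable-cpumask-locks holds a file lock per claimed core, and lslocks can see it on the pid. A sketch (the spdk_cpu_lock name comes from the grep above; the lock files' exact directory is not shown in this log):

    locks_exist() {
        # succeeds while the target with this pid holds its per-core lock
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }
    locks_exist 79799 && echo "core lock held"   # pid from this run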
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 79799
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 79799 ']'
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 79799
00:09:41.439    05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:41.439    05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79799
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:41.439  killing process with pid 79799
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79799'
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 79799
00:09:41.439   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 79799
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 79799
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 79799
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:41.699    05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 79799
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 79799 ']'
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:41.699  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:41.699  ERROR: process (pid: 79799) is no longer running
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:41.699  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (79799) - No such process
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:41.699  ************************************
00:09:41.699  END TEST default_locks
00:09:41.699  ************************************
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:41.699  
00:09:41.699  real	0m1.607s
00:09:41.699  user	0m1.710s
00:09:41.699  sys	0m0.468s
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.699   05:55:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:09:41.699   05:55:02 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:09:41.699   05:55:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:41.699   05:55:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:41.699   05:55:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:41.699  ************************************
00:09:41.699  START TEST default_locks_via_rpc
00:09:41.699  ************************************
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:09:41.699  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=79854
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 79854
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 79854 ']'
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:41.699   05:55:02 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:41.959  [2024-11-18 05:55:02.697533] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:41.959  [2024-11-18 05:55:02.697740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79854 ]
00:09:41.959  [2024-11-18 05:55:02.852546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:41.959  [2024-11-18 05:55:02.874766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 79854
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 79854
00:09:42.218   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
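default_locks_via_rpc toggles the same locks on a live target instead of at startup: framework_disable_cpumask_locks releases them (so the no_locks check passes), and framework_enable_cpumask_locks re-acquires them (so locks_exist passes again, as just traced). A sketch of that sequence, reusing the paths and pid from this run:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    lslocks -p 79854 | grep -c spdk_cpu_lock || true   # expect 0 while disabled
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p 79854 | grep -q spdk_cpu_lock && echo "lock re-acquired"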
00:09:42.478   05:55:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 79854
00:09:42.478   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 79854 ']'
00:09:42.478   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 79854
00:09:42.478    05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:09:42.478   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:42.478    05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79854
00:09:42.738  killing process with pid 79854
00:09:42.738   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:42.738   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:42.738   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79854'
00:09:42.738   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 79854
00:09:42.738   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 79854
00:09:42.997  ************************************
00:09:42.997  END TEST default_locks_via_rpc
00:09:42.997  ************************************
00:09:42.997  
00:09:42.997  real	0m1.123s
00:09:42.997  user	0m1.092s
00:09:42.997  sys	0m0.483s
00:09:42.997   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:42.997   05:55:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:42.997   05:55:03 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:09:42.997   05:55:03 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:42.997   05:55:03 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:42.997   05:55:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:42.997  ************************************
00:09:42.997  START TEST non_locking_app_on_locked_coremask
00:09:42.997  ************************************
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=79893
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 79893 /var/tmp/spdk.sock
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 79893 ']'
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:42.997  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:42.997   05:55:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:42.997  [2024-11-18 05:55:03.876697] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:42.997  [2024-11-18 05:55:03.876939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79893 ]
00:09:43.257  [2024-11-18 05:55:04.032355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.257  [2024-11-18 05:55:04.055111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=79900
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 79900 /var/tmp/spdk2.sock
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 79900 ']'
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:43.257  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:09:43.257   05:55:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:43.516  [2024-11-18 05:55:04.300884] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:43.516  [2024-11-18 05:55:04.301088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79900 ]
00:09:43.516  [2024-11-18 05:55:04.469916] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:43.516  [2024-11-18 05:55:04.469989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:43.776  [2024-11-18 05:55:04.512918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
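The setup traced above starts two targets on the same core mask: the first claims core 0 normally, and the second can only come up because it opts out with --disable-cpumask-locks (its "CPU core locks deactivated" notice appears at 05:55:04). The launch pattern, with the binary path and sockets from this log:

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$tgt" -m 0x1 &                                                  # holds the core 0 lock
    "$tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # shares core 0, takes no lock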
00:09:44.344   05:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:44.344   05:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:44.344   05:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 79893
00:09:44.344   05:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 79893
00:09:44.344   05:55:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 79893
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 79893 ']'
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 79893
00:09:45.282    05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:45.282    05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79893
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:45.282  killing process with pid 79893
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79893'
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 79893
00:09:45.282   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 79893
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 79900
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 79900 ']'
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 79900
00:09:45.851    05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:45.851    05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79900
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:45.851  killing process with pid 79900
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79900'
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 79900
00:09:45.851   05:55:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 79900
00:09:46.110  
00:09:46.110  real	0m3.217s
00:09:46.110  user	0m3.585s
00:09:46.110  sys	0m1.056s
00:09:46.110   05:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:46.110   05:55:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:46.110  ************************************
00:09:46.110  END TEST non_locking_app_on_locked_coremask
00:09:46.110  ************************************
00:09:46.110   05:55:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:09:46.110   05:55:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:46.110   05:55:07 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:46.110   05:55:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:46.110  ************************************
00:09:46.110  START TEST locking_app_on_unlocked_coremask
00:09:46.110  ************************************
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=79965
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 79965 /var/tmp/spdk.sock
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 79965 ']'
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:46.110  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:46.110   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:46.405  [2024-11-18 05:55:07.148196] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:46.405  [2024-11-18 05:55:07.148437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79965 ]
00:09:46.405  [2024-11-18 05:55:07.303659] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:46.405  [2024-11-18 05:55:07.303743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:46.405  [2024-11-18 05:55:07.324951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=79979
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 79979 /var/tmp/spdk2.sock
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 79979 ']'
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:46.674  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:46.674   05:55:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:46.674  [2024-11-18 05:55:07.581233] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:46.674  [2024-11-18 05:55:07.581461] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79979 ]
00:09:46.933  [2024-11-18 05:55:07.754416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:46.933  [2024-11-18 05:55:07.795167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:47.869   05:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:47.869   05:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:47.869   05:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 79979
00:09:47.869   05:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:47.869   05:55:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 79979
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 79965
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 79965 ']'
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 79965
00:09:48.807    05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:48.807    05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79965
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:48.807  killing process with pid 79965
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79965'
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 79965
00:09:48.807   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 79965
00:09:49.067   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 79979
00:09:49.067   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 79979 ']'
00:09:49.067   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 79979
00:09:49.067    05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:49.067   05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:49.067    05:55:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79979
00:09:49.067   05:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:49.067  killing process with pid 79979
00:09:49.067   05:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:49.067   05:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79979'
00:09:49.067   05:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 79979
00:09:49.067   05:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 79979
00:09:49.326  
00:09:49.326  real	0m3.226s
00:09:49.326  user	0m3.626s
00:09:49.326  sys	0m1.052s
00:09:49.326   05:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:49.326   05:55:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:49.326  ************************************
00:09:49.326  END TEST locking_app_on_unlocked_coremask
00:09:49.326  ************************************
00:09:49.585   05:55:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:09:49.585   05:55:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:49.585   05:55:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:49.585   05:55:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:49.585  ************************************
00:09:49.585  START TEST locking_app_on_locked_coremask
00:09:49.585  ************************************
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=80037
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 80037 /var/tmp/spdk.sock
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 80037 ']'
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:49.585  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:49.585   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:49.585  [2024-11-18 05:55:10.421601] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:49.585  [2024-11-18 05:55:10.421816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80037 ]
00:09:49.845  [2024-11-18 05:55:10.575973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:49.845  [2024-11-18 05:55:10.596392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=80051
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 80051 /var/tmp/spdk2.sock
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 80051 /var/tmp/spdk2.sock
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:49.845    05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 80051 /var/tmp/spdk2.sock
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 80051 ']'
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:49.845  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:49.845   05:55:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:50.104  [2024-11-18 05:55:10.836714] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:50.104  [2024-11-18 05:55:10.836942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80051 ]
00:09:50.104  [2024-11-18 05:55:11.006279] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 80037 has claimed it.
00:09:50.104  [2024-11-18 05:55:11.006378] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:50.672  ERROR: process (pid: 80051) is no longer running
00:09:50.672  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (80051) - No such process
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
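The NOT wrapper evaluated above runs a command that is expected to fail and inverts its status: here waitforlisten against the second target fails because claim_cpu_cores refused core 0, es becomes 1, and NOT reports success. A simplified sketch of the helper (the real autotest_common.sh version also special-cases signal exits, per the `(( es > 128 ))` line above):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # succeed only if the wrapped command failed
    }
    NOT false && echo "false failed, as required"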
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 80037
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 80037
00:09:50.672   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:09:51.240   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 80037
00:09:51.240   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 80037 ']'
00:09:51.240   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 80037
00:09:51.240    05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:09:51.240   05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:51.240    05:55:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80037
00:09:51.240   05:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:51.240   05:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:51.240  killing process with pid 80037
00:09:51.240   05:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80037'
00:09:51.240   05:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 80037
00:09:51.240   05:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 80037
00:09:51.499  
00:09:51.499  real	0m1.930s
00:09:51.499  user	0m2.191s
00:09:51.499  sys	0m0.602s
00:09:51.499   05:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:51.499   05:55:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:51.499  ************************************
00:09:51.499  END TEST locking_app_on_locked_coremask
00:09:51.499  ************************************
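What the trace above exercises: the first spdk_tgt instance (pid 80037) holds the lock file for core 0, so a second instance started with the same -m 0x1 mask must fail with "Cannot create lock on core 0" and exit. A minimal sketch of that pattern, assuming spdk_tgt under build/bin and leaving out the waitforlisten/NOT plumbing the real test uses:

    # first instance claims core 0 via /var/tmp/spdk_cpu_lock_000
    ./build/bin/spdk_tgt -m 0x1 &
    first=$!
    sleep 1   # crude stand-in for waitforlisten
    # a second instance on the same core must fail to acquire the lock
    if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "unexpected: second instance claimed a locked core" >&2
    fi
    kill "$first"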
00:09:51.499   05:55:12 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:09:51.499   05:55:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:51.499   05:55:12 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:51.499   05:55:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:51.499  ************************************
00:09:51.499  START TEST locking_overlapped_coremask
00:09:51.499  ************************************
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=80093
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 80093 /var/tmp/spdk.sock
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 80093 ']'
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:51.499  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:51.499   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:51.500   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:51.500   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:51.500  [2024-11-18 05:55:12.393316] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:51.500  [2024-11-18 05:55:12.393489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80093 ]
00:09:51.759  [2024-11-18 05:55:12.537153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:51.759  [2024-11-18 05:55:12.560584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:51.759  [2024-11-18 05:55:12.560678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.759  [2024-11-18 05:55:12.560826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=80104
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 80104 /var/tmp/spdk2.sock
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 80104 /var/tmp/spdk2.sock
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:51.759    05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 80104 /var/tmp/spdk2.sock
00:09:51.759   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 80104 ']'
00:09:52.018  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:52.018   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:52.018   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:52.018   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:52.018   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:52.018   05:55:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:09:52.018  [2024-11-18 05:55:12.799595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:52.018  [2024-11-18 05:55:12.799816] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80104 ]
00:09:52.018  [2024-11-18 05:55:12.973967] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 80093 has claimed it.
00:09:52.018  [2024-11-18 05:55:12.974052] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:09:52.586  ERROR: process (pid: 80104) is no longer running
00:09:52.586  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (80104) - No such process
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 80093
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 80093 ']'
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 80093
00:09:52.586    05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:52.586    05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80093
00:09:52.586  killing process with pid 80093
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80093'
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 80093
00:09:52.586   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 80093
00:09:52.845  
00:09:52.845  real	0m1.481s
00:09:52.845  user	0m3.976s
00:09:52.845  sys	0m0.407s
00:09:52.845   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:52.845  ************************************
00:09:52.845  END TEST locking_overlapped_coremask
00:09:52.845  ************************************
00:09:52.845   05:55:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
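The check_remaining_locks step traced above (cpu_locks.sh@36-38) verifies that a target started with -m 0x7 left exactly three lock files behind, comparing a filesystem glob against a brace expansion:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # what is actually on disk
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo "no stray core locks"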
00:09:53.104   05:55:13 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:09:53.104   05:55:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:53.104   05:55:13 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:53.104   05:55:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:53.104  ************************************
00:09:53.104  START TEST locking_overlapped_coremask_via_rpc
00:09:53.104  ************************************
00:09:53.104   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:09:53.104   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=80150
00:09:53.104   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 80150 /var/tmp/spdk.sock
00:09:53.104   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 80150 ']'
00:09:53.104   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:09:53.105   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:53.105   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:53.105   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:53.105  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:53.105   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:53.105   05:55:13 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:53.105  [2024-11-18 05:55:13.924563] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:53.105  [2024-11-18 05:55:13.924738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80150 ]
00:09:53.105  [2024-11-18 05:55:14.072992] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:53.105  [2024-11-18 05:55:14.073063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:53.364  [2024-11-18 05:55:14.096658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:53.364  [2024-11-18 05:55:14.096745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.364  [2024-11-18 05:55:14.096883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=80156
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 80156 /var/tmp/spdk2.sock
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 80156 ']'
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:53.364  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:53.364   05:55:14 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:53.364  [2024-11-18 05:55:14.341714] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:53.364  [2024-11-18 05:55:14.341932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80156 ]
00:09:53.623  [2024-11-18 05:55:14.515988] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:09:53.623  [2024-11-18 05:55:14.516066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:53.623  [2024-11-18 05:55:14.569437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:53.623  [2024-11-18 05:55:14.569459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:53.623  [2024-11-18 05:55:14.569545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:54.558    05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:54.558  [2024-11-18 05:55:15.270988] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 80150 has claimed it.
00:09:54.558  request:
00:09:54.558  {
00:09:54.558  "method": "framework_enable_cpumask_locks",
00:09:54.558  "req_id": 1
00:09:54.558  }
00:09:54.558  Got JSON-RPC error response
00:09:54.558  response:
00:09:54.558  {
00:09:54.558  "code": -32603,
00:09:54.558  "message": "Failed to claim CPU core: 2"
00:09:54.558  }
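The failure above is the RPC-level counterpart of the core-claim error: pid 80150 already holds the lock for core 2, so asking the second target to enable cpumask locks returns -32603. The same call can be issued by hand against the second instance's socket:

    # expected to fail with "Failed to claim CPU core: 2" while the first
    # target still holds /var/tmp/spdk_cpu_lock_002
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks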
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 80150 /var/tmp/spdk.sock
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 80150 ']'
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:54.558  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.558   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:54.816   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 80156 /var/tmp/spdk2.sock
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 80156 ']'
00:09:54.817  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.817   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.075  ************************************
00:09:55.075  END TEST locking_overlapped_coremask_via_rpc
00:09:55.075  ************************************
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:09:55.075  
00:09:55.075  real	0m1.967s
00:09:55.075  user	0m1.149s
00:09:55.075  sys	0m0.141s
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:55.075   05:55:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:55.075   05:55:15 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:09:55.075   05:55:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 80150 ]]
00:09:55.075   05:55:15 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 80150
00:09:55.075   05:55:15 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 80150 ']'
00:09:55.075   05:55:15 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 80150
00:09:55.075    05:55:15 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:09:55.075   05:55:15 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:55.075    05:55:15 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80150
00:09:55.075  killing process with pid 80150
00:09:55.075   05:55:15 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:55.075   05:55:15 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:55.075   05:55:15 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80150'
00:09:55.076   05:55:15 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 80150
00:09:55.076   05:55:15 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 80150
00:09:55.334   05:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 80156 ]]
00:09:55.334   05:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 80156
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 80156 ']'
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 80156
00:09:55.334    05:55:16 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:55.334    05:55:16 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80156
00:09:55.334  killing process with pid 80156
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80156'
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 80156
00:09:55.334   05:55:16 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 80156
00:09:55.592   05:55:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:09:55.592   05:55:16 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:09:55.592   05:55:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 80150 ]]
00:09:55.592   05:55:16 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 80150
00:09:55.592   05:55:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 80150 ']'
00:09:55.592   05:55:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 80150
00:09:55.592  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80150) - No such process
00:09:55.592  Process with pid 80150 is not found
00:09:55.592   05:55:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 80150 is not found'
00:09:55.592   05:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 80156 ]]
00:09:55.592   05:55:16 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 80156
00:09:55.592   05:55:16 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 80156 ']'
00:09:55.592   05:55:16 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 80156
00:09:55.592  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (80156) - No such process
00:09:55.592  Process with pid 80156 is not found
00:09:55.592   05:55:16 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 80156 is not found'
00:09:55.592   05:55:16 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:09:55.850  ************************************
00:09:55.850  END TEST cpu_locks
00:09:55.850  ************************************
00:09:55.850  
00:09:55.850  real	0m15.777s
00:09:55.850  user	0m27.685s
00:09:55.850  sys	0m5.053s
00:09:55.850   05:55:16 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:55.850   05:55:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:09:55.850  
00:09:55.850  real	0m42.701s
00:09:55.850  user	1m24.222s
00:09:55.850  sys	0m8.580s
00:09:55.850   05:55:16 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:55.850   05:55:16 event -- common/autotest_common.sh@10 -- # set +x
00:09:55.850  ************************************
00:09:55.850  END TEST event
00:09:55.850  ************************************
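Every suite in this log is wrapped by run_test, which prints the START/END banners and accounts for the real/user/sys blocks. Roughly, judging only from the output shape here (not the actual helper source):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                            # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }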
00:09:55.850   05:55:16  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:09:55.850   05:55:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:55.850   05:55:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:55.850   05:55:16  -- common/autotest_common.sh@10 -- # set +x
00:09:55.850  ************************************
00:09:55.850  START TEST thread
00:09:55.850  ************************************
00:09:55.850   05:55:16 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:09:55.850  * Looking for test storage...
00:09:55.850  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:09:55.850    05:55:16 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:55.850     05:55:16 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:09:55.850     05:55:16 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:55.850    05:55:16 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:55.850    05:55:16 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:55.850    05:55:16 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:55.850    05:55:16 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:55.850    05:55:16 thread -- scripts/common.sh@336 -- # IFS=.-:
00:09:55.850    05:55:16 thread -- scripts/common.sh@336 -- # read -ra ver1
00:09:55.850    05:55:16 thread -- scripts/common.sh@337 -- # IFS=.-:
00:09:55.850    05:55:16 thread -- scripts/common.sh@337 -- # read -ra ver2
00:09:55.850    05:55:16 thread -- scripts/common.sh@338 -- # local 'op=<'
00:09:55.850    05:55:16 thread -- scripts/common.sh@340 -- # ver1_l=2
00:09:55.850    05:55:16 thread -- scripts/common.sh@341 -- # ver2_l=1
00:09:55.850    05:55:16 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:55.850    05:55:16 thread -- scripts/common.sh@344 -- # case "$op" in
00:09:55.850    05:55:16 thread -- scripts/common.sh@345 -- # : 1
00:09:55.850    05:55:16 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:55.850    05:55:16 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:55.850     05:55:16 thread -- scripts/common.sh@365 -- # decimal 1
00:09:56.109     05:55:16 thread -- scripts/common.sh@353 -- # local d=1
00:09:56.110     05:55:16 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:56.110     05:55:16 thread -- scripts/common.sh@355 -- # echo 1
00:09:56.110    05:55:16 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:09:56.110     05:55:16 thread -- scripts/common.sh@366 -- # decimal 2
00:09:56.110     05:55:16 thread -- scripts/common.sh@353 -- # local d=2
00:09:56.110     05:55:16 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:56.110     05:55:16 thread -- scripts/common.sh@355 -- # echo 2
00:09:56.110    05:55:16 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:09:56.110    05:55:16 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:56.110    05:55:16 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:56.110    05:55:16 thread -- scripts/common.sh@368 -- # return 0
00:09:56.110    05:55:16 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:56.110    05:55:16 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:56.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.110  		--rc genhtml_branch_coverage=1
00:09:56.110  		--rc genhtml_function_coverage=1
00:09:56.110  		--rc genhtml_legend=1
00:09:56.110  		--rc geninfo_all_blocks=1
00:09:56.110  		--rc geninfo_unexecuted_blocks=1
00:09:56.110  		
00:09:56.110  		'
00:09:56.110    05:55:16 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:56.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.110  		--rc genhtml_branch_coverage=1
00:09:56.110  		--rc genhtml_function_coverage=1
00:09:56.110  		--rc genhtml_legend=1
00:09:56.110  		--rc geninfo_all_blocks=1
00:09:56.110  		--rc geninfo_unexecuted_blocks=1
00:09:56.110  		
00:09:56.110  		'
00:09:56.110    05:55:16 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:56.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.110  		--rc genhtml_branch_coverage=1
00:09:56.110  		--rc genhtml_function_coverage=1
00:09:56.110  		--rc genhtml_legend=1
00:09:56.110  		--rc geninfo_all_blocks=1
00:09:56.110  		--rc geninfo_unexecuted_blocks=1
00:09:56.110  		
00:09:56.110  		'
00:09:56.110    05:55:16 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:56.110  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:56.110  		--rc genhtml_branch_coverage=1
00:09:56.110  		--rc genhtml_function_coverage=1
00:09:56.110  		--rc genhtml_legend=1
00:09:56.110  		--rc geninfo_all_blocks=1
00:09:56.110  		--rc geninfo_unexecuted_blocks=1
00:09:56.110  		
00:09:56.110  		'
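The version check traced above boils down to splitting both versions on '.', '-' or ':' and comparing numerically field by field, so lt 1.15 2 returns success (1 < 2) and the branch/function-coverage lcov options are exported. In brief:

    IFS=.-: read -ra ver1 <<< "1.15"
    IFS=.-: read -ra ver2 <<< "2"
    (( ver1[0] < ver2[0] )) && echo "1.15 sorts before 2"   # 1 < 2, first field decides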
00:09:56.110   05:55:16 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:56.110   05:55:16 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:09:56.110   05:55:16 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:56.110   05:55:16 thread -- common/autotest_common.sh@10 -- # set +x
00:09:56.110  ************************************
00:09:56.110  START TEST thread_poller_perf
00:09:56.110  ************************************
00:09:56.110   05:55:16 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:09:56.110  [2024-11-18 05:55:16.872914] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:56.110  [2024-11-18 05:55:16.873085] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80289 ]
00:09:56.110  [2024-11-18 05:55:17.024311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:56.110  [2024-11-18 05:55:17.046333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:56.110  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:09:57.487  [2024-11-18T05:55:18.465Z]  ======================================
00:09:57.487  [2024-11-18T05:55:18.465Z]  busy:2214101810 (cyc)
00:09:57.487  [2024-11-18T05:55:18.465Z]  total_run_count: 317000
00:09:57.487  [2024-11-18T05:55:18.465Z]  tsc_hz: 2200000000 (cyc)
00:09:57.487  [2024-11-18T05:55:18.465Z]  ======================================
00:09:57.487  [2024-11-18T05:55:18.465Z]  poller_cost: 6984 (cyc), 3174 (nsec)
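The poller_cost above is straight integer arithmetic over the reported counters: busy cycles divided by run count, then converted to nanoseconds at the reported TSC rate:

    busy=2214101810 runs=317000 tsc_hz=2200000000
    echo $(( busy / runs ))                        # 6984 cyc per poller call
    echo $(( busy / runs * 1000000000 / tsc_hz ))  # 3174 nsec at 2.2 GHz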
00:09:57.487  
00:09:57.487  real	0m1.267s
00:09:57.487  user	0m1.108s
00:09:57.487  sys	0m0.059s
00:09:57.487   05:55:18 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:57.487  ************************************
00:09:57.487  END TEST thread_poller_perf
00:09:57.488  ************************************
00:09:57.488   05:55:18 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:57.488   05:55:18 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:57.488   05:55:18 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:09:57.488   05:55:18 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:57.488   05:55:18 thread -- common/autotest_common.sh@10 -- # set +x
00:09:57.488  ************************************
00:09:57.488  START TEST thread_poller_perf
00:09:57.488  ************************************
00:09:57.488   05:55:18 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:09:57.488  [2024-11-18 05:55:18.188004] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:57.488  [2024-11-18 05:55:18.188205] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80324 ]
00:09:57.488  [2024-11-18 05:55:18.340912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:57.488  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:09:57.488  [2024-11-18 05:55:18.363069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.439  [2024-11-18T05:55:19.417Z]  ======================================
00:09:58.439  [2024-11-18T05:55:19.417Z]  busy:2203942363 (cyc)
00:09:58.439  [2024-11-18T05:55:19.417Z]  total_run_count: 4077000
00:09:58.439  [2024-11-18T05:55:19.417Z]  tsc_hz: 2200000000 (cyc)
00:09:58.439  [2024-11-18T05:55:19.417Z]  ======================================
00:09:58.439  [2024-11-18T05:55:19.417Z]  poller_cost: 540 (cyc), 245 (nsec)
00:09:58.705  
00:09:58.705  real	0m1.261s
00:09:58.705  user	0m1.097s
00:09:58.705  sys	0m0.063s
00:09:58.705   05:55:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:58.705  ************************************
00:09:58.705   05:55:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:09:58.705  END TEST thread_poller_perf
00:09:58.705  ************************************
00:09:58.705   05:55:19 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
00:09:58.705   05:55:19 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:09:58.705   05:55:19 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:58.705   05:55:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:58.705   05:55:19 thread -- common/autotest_common.sh@10 -- # set +x
00:09:58.705  ************************************
00:09:58.705  START TEST thread_spdk_lock
00:09:58.705  ************************************
00:09:58.705   05:55:19 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:09:58.705  [2024-11-18 05:55:19.503613] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:58.705  [2024-11-18 05:55:19.503831] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80356 ]
00:09:58.705  [2024-11-18 05:55:19.662898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:58.963  [2024-11-18 05:55:19.687507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:58.963  [2024-11-18 05:55:19.687595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:59.531  [2024-11-18 05:55:20.221877] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:59.531  [2024-11-18 05:55:20.222000] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:09:59.531  [2024-11-18 05:55:20.222040] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x5fab091cb980
00:09:59.531  [2024-11-18 05:55:20.223199] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:59.531  [2024-11-18 05:55:20.223309] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:59.531  [2024-11-18 05:55:20.223356] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:09:59.531  Starting test contend
00:09:59.531    Worker    Delay  Wait us  Hold us Total us
00:09:59.531         0        3   113042   199569   312612
00:09:59.531         1        5    49828   300785   350614
00:09:59.531  PASS test contend
00:09:59.531  Starting test hold_by_poller
00:09:59.531  PASS test hold_by_poller
00:09:59.531  Starting test hold_by_message
00:09:59.531  PASS test hold_by_message
00:09:59.531  /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:09:59.531     100014 assertions passed
00:09:59.531          0 assertions failed
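In the contend table above, Total us is Wait us plus Hold us for each worker, to within a microsecond of rounding:

    echo $(( 113042 + 199569 ))   # 312611 ~ 312612 (worker 0)
    echo $(( 49828 + 300785 ))    # 350613 ~ 350614 (worker 1)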
00:09:59.531  
00:09:59.531  real	0m0.807s
00:09:59.531  user	0m1.168s
00:09:59.531  sys	0m0.071s
00:09:59.531   05:55:20 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.531  ************************************
00:09:59.531  END TEST thread_spdk_lock
00:09:59.531  ************************************
00:09:59.531   05:55:20 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:09:59.531  
00:09:59.531  real	0m3.659s
00:09:59.531  user	0m3.522s
00:09:59.531  sys	0m0.369s
00:09:59.531   05:55:20 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:59.531   05:55:20 thread -- common/autotest_common.sh@10 -- # set +x
00:09:59.531  ************************************
00:09:59.531  END TEST thread
00:09:59.531  ************************************
00:09:59.531   05:55:20  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:09:59.531   05:55:20  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:09:59.531   05:55:20  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:59.531   05:55:20  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:59.531   05:55:20  -- common/autotest_common.sh@10 -- # set +x
00:09:59.531  ************************************
00:09:59.531  START TEST app_cmdline
00:09:59.531  ************************************
00:09:59.531   05:55:20 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:09:59.531  * Looking for test storage...
00:09:59.531  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:09:59.531    05:55:20 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:59.531     05:55:20 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:09:59.531     05:55:20 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:59.791    05:55:20 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@345 -- # : 1
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:59.791     05:55:20 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:59.791    05:55:20 app_cmdline -- scripts/common.sh@368 -- # return 0
00:09:59.791    05:55:20 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:59.791    05:55:20 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:59.791  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.791  		--rc genhtml_branch_coverage=1
00:09:59.791  		--rc genhtml_function_coverage=1
00:09:59.791  		--rc genhtml_legend=1
00:09:59.791  		--rc geninfo_all_blocks=1
00:09:59.791  		--rc geninfo_unexecuted_blocks=1
00:09:59.791  		
00:09:59.791  		'
00:09:59.791    05:55:20 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:59.791  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.791  		--rc genhtml_branch_coverage=1
00:09:59.791  		--rc genhtml_function_coverage=1
00:09:59.791  		--rc genhtml_legend=1
00:09:59.791  		--rc geninfo_all_blocks=1
00:09:59.791  		--rc geninfo_unexecuted_blocks=1
00:09:59.791  		
00:09:59.791  		'
00:09:59.791    05:55:20 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:59.791  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.791  		--rc genhtml_branch_coverage=1
00:09:59.791  		--rc genhtml_function_coverage=1
00:09:59.791  		--rc genhtml_legend=1
00:09:59.791  		--rc geninfo_all_blocks=1
00:09:59.791  		--rc geninfo_unexecuted_blocks=1
00:09:59.791  		
00:09:59.791  		'
00:09:59.791    05:55:20 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:59.791  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:59.791  		--rc genhtml_branch_coverage=1
00:09:59.791  		--rc genhtml_function_coverage=1
00:09:59.791  		--rc genhtml_legend=1
00:09:59.791  		--rc geninfo_all_blocks=1
00:09:59.791  		--rc geninfo_unexecuted_blocks=1
00:09:59.791  		
00:09:59.791  		'
00:09:59.791   05:55:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:09:59.791   05:55:20 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=80435
00:09:59.791   05:55:20 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:09:59.791   05:55:20 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 80435
00:09:59.791   05:55:20 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 80435 ']'
00:09:59.791   05:55:20 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:59.791   05:55:20 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:59.791  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:59.791   05:55:20 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:59.791   05:55:20 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:59.791   05:55:20 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:09:59.791  [2024-11-18 05:55:20.595163] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:09:59.791  [2024-11-18 05:55:20.595356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80435 ]
00:09:59.791  [2024-11-18 05:55:20.746882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:00.050  [2024-11-18 05:55:20.769796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:00.050   05:55:20 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:00.050   05:55:20 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:10:00.050   05:55:20 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:10:00.309  {
00:10:00.309    "version": "SPDK v25.01-pre git sha1 83e8405e4",
00:10:00.309    "fields": {
00:10:00.309      "major": 25,
00:10:00.309      "minor": 1,
00:10:00.309      "patch": 0,
00:10:00.309      "suffix": "-pre",
00:10:00.309      "commit": "83e8405e4"
00:10:00.309    }
00:10:00.309  }
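The JSON above is what scripts/rpc.py prints for spdk_get_version; a single field can be pulled out with the same jq this suite already uses:

    ./scripts/rpc.py spdk_get_version | jq -r '.version'
    # -> SPDK v25.01-pre git sha1 83e8405e4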
00:10:00.309   05:55:21 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:10:00.309   05:55:21 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:10:00.309   05:55:21 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:10:00.309   05:55:21 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:10:00.309    05:55:21 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:10:00.309    05:55:21 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:10:00.309    05:55:21 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:00.309    05:55:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:00.309    05:55:21 app_cmdline -- app/cmdline.sh@26 -- # sort
00:10:00.309    05:55:21 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:00.309   05:55:21 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:10:00.309   05:55:21 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:10:00.309   05:55:21 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.309    05:55:21 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.309    05:55:21 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:10:00.309   05:55:21 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:00.568  request:
00:10:00.568  {
00:10:00.568    "method": "env_dpdk_get_mem_stats",
00:10:00.568    "req_id": 1
00:10:00.568  }
00:10:00.568  Got JSON-RPC error response
00:10:00.568  response:
00:10:00.568  {
00:10:00.568    "code": -32601,
00:10:00.568    "message": "Method not found"
00:10:00.568  }
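The -32601 here is the allowlist at work: this target was started with --rpcs-allowed spdk_get_version,rpc_get_methods (see the launch line above), so any other method is rejected before dispatch:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # -> -32601 "Method not found"
    ./scripts/rpc.py rpc_get_methods          # -> allowed; returns the 2 methods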
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:00.568   05:55:21 app_cmdline -- app/cmdline.sh@1 -- # killprocess 80435
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 80435 ']'
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 80435
00:10:00.568    05:55:21 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:00.568    05:55:21 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80435
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:00.568  killing process with pid 80435
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80435'
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@973 -- # kill 80435
00:10:00.568   05:55:21 app_cmdline -- common/autotest_common.sh@978 -- # wait 80435
00:10:01.137  
00:10:01.137  real	0m1.485s
00:10:01.137  user	0m1.823s
00:10:01.137  sys	0m0.425s
00:10:01.137   05:55:21 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:01.137   05:55:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:01.137  ************************************
00:10:01.137  END TEST app_cmdline
00:10:01.137  ************************************
00:10:01.137   05:55:21  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:10:01.137   05:55:21  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:01.137   05:55:21  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.137   05:55:21  -- common/autotest_common.sh@10 -- # set +x
00:10:01.137  ************************************
00:10:01.137  START TEST version
00:10:01.137  ************************************
00:10:01.137   05:55:21 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:10:01.137  * Looking for test storage...
00:10:01.137  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:10:01.137    05:55:21 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:01.137     05:55:21 version -- common/autotest_common.sh@1693 -- # lcov --version
00:10:01.137     05:55:21 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:01.137    05:55:22 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:01.137    05:55:22 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:01.137    05:55:22 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:01.137    05:55:22 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:01.137    05:55:22 version -- scripts/common.sh@336 -- # IFS=.-:
00:10:01.137    05:55:22 version -- scripts/common.sh@336 -- # read -ra ver1
00:10:01.137    05:55:22 version -- scripts/common.sh@337 -- # IFS=.-:
00:10:01.137    05:55:22 version -- scripts/common.sh@337 -- # read -ra ver2
00:10:01.137    05:55:22 version -- scripts/common.sh@338 -- # local 'op=<'
00:10:01.137    05:55:22 version -- scripts/common.sh@340 -- # ver1_l=2
00:10:01.137    05:55:22 version -- scripts/common.sh@341 -- # ver2_l=1
00:10:01.137    05:55:22 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:01.137    05:55:22 version -- scripts/common.sh@344 -- # case "$op" in
00:10:01.137    05:55:22 version -- scripts/common.sh@345 -- # : 1
00:10:01.137    05:55:22 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:01.137    05:55:22 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:01.137     05:55:22 version -- scripts/common.sh@365 -- # decimal 1
00:10:01.137     05:55:22 version -- scripts/common.sh@353 -- # local d=1
00:10:01.137     05:55:22 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.137     05:55:22 version -- scripts/common.sh@355 -- # echo 1
00:10:01.137    05:55:22 version -- scripts/common.sh@365 -- # ver1[v]=1
00:10:01.137     05:55:22 version -- scripts/common.sh@366 -- # decimal 2
00:10:01.137     05:55:22 version -- scripts/common.sh@353 -- # local d=2
00:10:01.137     05:55:22 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:01.137     05:55:22 version -- scripts/common.sh@355 -- # echo 2
00:10:01.137    05:55:22 version -- scripts/common.sh@366 -- # ver2[v]=2
00:10:01.137    05:55:22 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:01.137    05:55:22 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:01.137    05:55:22 version -- scripts/common.sh@368 -- # return 0
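The cmp_versions walk above (here evaluating "lt 1.15 2") splits both version strings on '.', '-' and ':' into arrays and compares them field by field until one side wins. A condensed sketch of the same idea, assuming purely numeric fields (the real scripts/common.sh helper additionally validates each field through its decimal function):

    # cmp_versions 1.15 '<' 2  -> exit 0, since 1 < 2 decides in the first field
    cmp_versions() {
        local op=$2 ver1 ver2
        local IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # missing fields compare as 0, so 1.15 vs 2 becomes (1,15) vs (2,0)
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [ "$op" = '>' ]; return; }
            (( a < b )) && { [ "$op" = '<' ]; return; }
        done
        [ "$op" = '=' ]    # every field matched
    }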
00:10:01.137    05:55:22 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:01.137    05:55:22 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:01.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.137  		--rc genhtml_branch_coverage=1
00:10:01.137  		--rc genhtml_function_coverage=1
00:10:01.137  		--rc genhtml_legend=1
00:10:01.137  		--rc geninfo_all_blocks=1
00:10:01.137  		--rc geninfo_unexecuted_blocks=1
00:10:01.137  		
00:10:01.137  		'
00:10:01.137    05:55:22 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:01.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.137  		--rc genhtml_branch_coverage=1
00:10:01.137  		--rc genhtml_function_coverage=1
00:10:01.137  		--rc genhtml_legend=1
00:10:01.137  		--rc geninfo_all_blocks=1
00:10:01.137  		--rc geninfo_unexecuted_blocks=1
00:10:01.137  		
00:10:01.137  		'
00:10:01.137    05:55:22 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:01.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.137  		--rc genhtml_branch_coverage=1
00:10:01.137  		--rc genhtml_function_coverage=1
00:10:01.137  		--rc genhtml_legend=1
00:10:01.137  		--rc geninfo_all_blocks=1
00:10:01.137  		--rc geninfo_unexecuted_blocks=1
00:10:01.137  		
00:10:01.137  		'
00:10:01.137    05:55:22 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:01.137  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.137  		--rc genhtml_branch_coverage=1
00:10:01.137  		--rc genhtml_function_coverage=1
00:10:01.137  		--rc genhtml_legend=1
00:10:01.137  		--rc geninfo_all_blocks=1
00:10:01.137  		--rc geninfo_unexecuted_blocks=1
00:10:01.137  		
00:10:01.137  		'
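The exported LCOV_OPTS/LCOV values above only pre-bake rc overrides (branch and function coverage, genhtml legend, all/unexecuted blocks) for later coverage runs; nothing is measured at this point. A hypothetical consumer, with placeholder paths, would expand the options unquoted so the flags word-split:

    # Hypothetical coverage capture using the exported options (paths are placeholders).
    lcov $LCOV_OPTS --capture --directory build --output-file coverage.info
    genhtml coverage.info --output-directory coverage_html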
00:10:01.137    05:55:22 version -- app/version.sh@17 -- # get_header_version major
00:10:01.137    05:55:22 version -- app/version.sh@14 -- # cut -f2
00:10:01.137    05:55:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:01.137    05:55:22 version -- app/version.sh@14 -- # tr -d '"'
00:10:01.137   05:55:22 version -- app/version.sh@17 -- # major=25
00:10:01.137    05:55:22 version -- app/version.sh@18 -- # get_header_version minor
00:10:01.137    05:55:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:01.137    05:55:22 version -- app/version.sh@14 -- # cut -f2
00:10:01.137    05:55:22 version -- app/version.sh@14 -- # tr -d '"'
00:10:01.137   05:55:22 version -- app/version.sh@18 -- # minor=1
00:10:01.137    05:55:22 version -- app/version.sh@19 -- # get_header_version patch
00:10:01.137    05:55:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:01.137    05:55:22 version -- app/version.sh@14 -- # tr -d '"'
00:10:01.137    05:55:22 version -- app/version.sh@14 -- # cut -f2
00:10:01.137   05:55:22 version -- app/version.sh@19 -- # patch=0
00:10:01.137    05:55:22 version -- app/version.sh@20 -- # get_header_version suffix
00:10:01.138    05:55:22 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:01.138    05:55:22 version -- app/version.sh@14 -- # cut -f2
00:10:01.138    05:55:22 version -- app/version.sh@14 -- # tr -d '"'
00:10:01.138   05:55:22 version -- app/version.sh@20 -- # suffix=-pre
00:10:01.138   05:55:22 version -- app/version.sh@22 -- # version=25.1
00:10:01.138   05:55:22 version -- app/version.sh@25 -- # (( patch != 0 ))
00:10:01.138   05:55:22 version -- app/version.sh@28 -- # version=25.1rc0
00:10:01.138   05:55:22 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:10:01.138    05:55:22 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:10:01.397   05:55:22 version -- app/version.sh@30 -- # py_version=25.1rc0
00:10:01.397   05:55:22 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
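version.sh, as traced above, greps each #define out of include/spdk/version.h, cuts the second tab-separated field, strips the quotes, and finally cross-checks the assembled string against the Python package. A minimal sketch of that pipeline; the suffix-to-rc0 mapping is inferred from the traced values (25.1 plus -pre becomes 25.1rc0):

    # Pull one SPDK version component out of version.h, as in the trace above.
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h \
            | cut -f2 | tr -d '"'
    }

    major=$(get_header_version MAJOR)     # 25
    minor=$(get_header_version MINOR)     # 1
    patch=$(get_header_version PATCH)     # 0
    suffix=$(get_header_version SUFFIX)   # -pre
    version=$major.$minor
    (( patch != 0 )) && version=$version.$patch
    [ "$suffix" = -pre ] && version=${version}rc0
    # the test's final assertion: header and Python package must agree
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [ "$version" = "$py_version" ]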
00:10:01.397  
00:10:01.397  real	0m0.250s
00:10:01.397  user	0m0.160s
00:10:01.397  sys	0m0.135s
00:10:01.397   05:55:22 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:01.397   05:55:22 version -- common/autotest_common.sh@10 -- # set +x
00:10:01.397  ************************************
00:10:01.397  END TEST version
00:10:01.397  ************************************
00:10:01.397   05:55:22  -- spdk/autotest.sh@179 -- # '[' 1 -eq 1 ']'
00:10:01.397   05:55:22  -- spdk/autotest.sh@180 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:10:01.397   05:55:22  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:01.397   05:55:22  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.397   05:55:22  -- common/autotest_common.sh@10 -- # set +x
00:10:01.397  ************************************
00:10:01.397  START TEST blockdev_general
00:10:01.397  ************************************
00:10:01.397   05:55:22 blockdev_general -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:10:01.397  * Looking for test storage...
00:10:01.397  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:10:01.397    05:55:22 blockdev_general -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:01.397     05:55:22 blockdev_general -- common/autotest_common.sh@1693 -- # lcov --version
00:10:01.397     05:55:22 blockdev_general -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:01.397    05:55:22 blockdev_general -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@336 -- # IFS=.-:
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@336 -- # read -ra ver1
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@337 -- # IFS=.-:
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@337 -- # read -ra ver2
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@338 -- # local 'op=<'
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@340 -- # ver1_l=2
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@341 -- # ver2_l=1
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@344 -- # case "$op" in
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@345 -- # : 1
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@365 -- # decimal 1
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@353 -- # local d=1
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@355 -- # echo 1
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@365 -- # ver1[v]=1
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@366 -- # decimal 2
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@353 -- # local d=2
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:01.397     05:55:22 blockdev_general -- scripts/common.sh@355 -- # echo 2
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@366 -- # ver2[v]=2
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:01.397    05:55:22 blockdev_general -- scripts/common.sh@368 -- # return 0
00:10:01.397    05:55:22 blockdev_general -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:01.397    05:55:22 blockdev_general -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:01.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.397  		--rc genhtml_branch_coverage=1
00:10:01.397  		--rc genhtml_function_coverage=1
00:10:01.397  		--rc genhtml_legend=1
00:10:01.397  		--rc geninfo_all_blocks=1
00:10:01.397  		--rc geninfo_unexecuted_blocks=1
00:10:01.397  		
00:10:01.397  		'
00:10:01.397    05:55:22 blockdev_general -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:01.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.397  		--rc genhtml_branch_coverage=1
00:10:01.397  		--rc genhtml_function_coverage=1
00:10:01.397  		--rc genhtml_legend=1
00:10:01.397  		--rc geninfo_all_blocks=1
00:10:01.397  		--rc geninfo_unexecuted_blocks=1
00:10:01.397  		
00:10:01.397  		'
00:10:01.397    05:55:22 blockdev_general -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:01.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.397  		--rc genhtml_branch_coverage=1
00:10:01.397  		--rc genhtml_function_coverage=1
00:10:01.397  		--rc genhtml_legend=1
00:10:01.397  		--rc geninfo_all_blocks=1
00:10:01.397  		--rc geninfo_unexecuted_blocks=1
00:10:01.397  		
00:10:01.397  		'
00:10:01.397    05:55:22 blockdev_general -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:01.397  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:01.397  		--rc genhtml_branch_coverage=1
00:10:01.397  		--rc genhtml_function_coverage=1
00:10:01.397  		--rc genhtml_legend=1
00:10:01.397  		--rc geninfo_all_blocks=1
00:10:01.397  		--rc geninfo_unexecuted_blocks=1
00:10:01.397  		
00:10:01.397  		'
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:10:01.397    05:55:22 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:10:01.397   05:55:22 blockdev_general -- bdev/blockdev.sh@20 -- # :
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:10:01.717    05:55:22 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device=
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@683 -- # dek=
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx=
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]]
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=80588
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 80588
00:10:01.717   05:55:22 blockdev_general -- common/autotest_common.sh@835 -- # '[' -z 80588 ']'
00:10:01.717   05:55:22 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:10:01.717   05:55:22 blockdev_general -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:01.717   05:55:22 blockdev_general -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:01.717  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:01.717   05:55:22 blockdev_general -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:01.717   05:55:22 blockdev_general -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:01.717   05:55:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:01.717  [2024-11-18 05:55:22.447277] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:10:01.718  [2024-11-18 05:55:22.447471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80588 ]
00:10:01.718  [2024-11-18 05:55:22.602975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:01.718  [2024-11-18 05:55:22.626266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:01.718   05:55:22 blockdev_general -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:01.718   05:55:22 blockdev_general -- common/autotest_common.sh@868 -- # return 0
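start_spdk_tgt above launched spdk_tgt with --wait-for-rpc, and waitforlisten then blocked until the target answered on /var/tmp/spdk.sock. A rough sketch of that polling loop; the retry budget and the probe RPC are illustrative, not the exact helper:

    # Illustrative poll loop: wait until the target's RPC socket answers.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local i max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            # give up early if the target died while we were waiting
            kill -0 "$pid" 2>/dev/null || return 1
            # probe with a harmless RPC; success means the socket is live
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }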
00:10:01.718   05:55:22 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:10:01.718   05:55:22 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf
00:10:01.718   05:55:22 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd
00:10:01.718   05:55:22 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:01.718   05:55:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:01.995  [2024-11-18 05:55:22.844179] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:01.995  [2024-11-18 05:55:22.844272] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:01.995  
00:10:01.995  [2024-11-18 05:55:22.852119] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:01.995  [2024-11-18 05:55:22.852179] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:01.995  
00:10:01.995  Malloc0
00:10:01.995  Malloc1
00:10:01.995  Malloc2
00:10:01.995  Malloc3
00:10:01.995  Malloc4
00:10:01.995  Malloc5
00:10:01.995  Malloc6
00:10:01.995  Malloc7
00:10:02.254  Malloc8
00:10:02.254  Malloc9
00:10:02.254  [2024-11-18 05:55:22.993242] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:02.254  [2024-11-18 05:55:22.993357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:02.254  [2024-11-18 05:55:22.993407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x51600000b480
00:10:02.254  [2024-11-18 05:55:22.993439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:02.254  [2024-11-18 05:55:22.996236] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:02.254  [2024-11-18 05:55:22.996320] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:02.254  TestPT
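setup_bdev_conf has now provisioned the fixture set every later suite runs against: ten malloc bdevs, split vbdevs over Malloc1 and Malloc2, a passthru bdev (TestPT) claiming Malloc3, and raid0/concat0/raid1 volumes over Malloc4-Malloc9. As standalone RPCs the equivalent setup looks roughly like this; sizes mirror the bdev list printed below, while the script's exact flags may differ:

    # Sketch of the fixture config as individual rpc.py calls.
    RPC=scripts/rpc.py
    for i in 0 1 2 3 4 5 6 7 8 9; do
        $RPC bdev_malloc_create -b Malloc$i 32 512        # 32 MiB, 512 B blocks
    done
    $RPC bdev_split_create Malloc1 2                      # -> Malloc1p0, Malloc1p1
    $RPC bdev_split_create Malloc2 8                      # -> Malloc2p0..Malloc2p7
    $RPC bdev_passthru_create -b Malloc3 -p TestPT        # passthru over Malloc3
    $RPC bdev_raid_create -n raid0 -z 64 -r raid0 -b "Malloc4 Malloc5"
    $RPC bdev_raid_create -n concat0 -z 64 -r concat -b "Malloc6 Malloc7"
    $RPC bdev_raid_create -n raid1 -r raid1 -b "Malloc8 Malloc9"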
00:10:02.254   05:55:23 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.254   05:55:23 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
00:10:02.254  5000+0 records in
00:10:02.254  5000+0 records out
00:10:02.254  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0225497 s, 454 MB/s
00:10:02.254   05:55:23 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
00:10:02.254   05:55:23 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.254   05:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:02.254  AIO0
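The dd above carves out a 10 MB backing file (5000 records of 2048 bytes) and bdev_aio_create registers it as AIO0 with a 2048-byte block size, exactly as the rpc_cmd trace shows. Outside the harness the same two steps are (backing-file path is a placeholder):

    # Back an AIO bdev with a zero-filled file: filename, bdev name, block size.
    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
    scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048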
00:10:02.254   05:55:23 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.254   05:55:23 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:10:02.254   05:55:23 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.254   05:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:02.254   05:55:23 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.254   05:55:23 blockdev_general -- bdev/blockdev.sh@739 -- # cat
00:10:02.254    05:55:23 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.254    05:55:23 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.254    05:55:23 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:02.254    05:55:23 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.254   05:55:23 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:10:02.254    05:55:23 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:10:02.255    05:55:23 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:10:02.255    05:55:23 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:02.255    05:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:02.515    05:55:23 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:02.515   05:55:23 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:10:02.515    05:55:23 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name
00:10:02.517    05:55:23 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "07969e35-2219-41cd-943b-1d22252bca2a"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "07969e35-2219-41cd-943b-1d22252bca2a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "881d8c1c-0727-5286-aaf5-163dd6273385"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "881d8c1c-0727-5286-aaf5-163dd6273385",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "12f4d107-489e-596c-be1c-fcbc59206a2b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "12f4d107-489e-596c-be1c-fcbc59206a2b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "bff30274-289d-5c2e-b86e-9be3f91da5ac"' '  ],' '  
"product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "bff30274-289d-5c2e-b86e-9be3f91da5ac",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "ec83597c-2a54-5449-830c-1db9d62af1b0"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ec83597c-2a54-5449-830c-1db9d62af1b0",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "c9af9a33-c842-5108-9280-18c8d2f0949c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "c9af9a33-c842-5108-9280-18c8d2f0949c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "238b83fd-a6e3-59ef-8d98-b07387f117e7"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "238b83fd-a6e3-59ef-8d98-b07387f117e7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": 
false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "f4c19827-5a01-5ea2-b82f-614e10573e70"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "f4c19827-5a01-5ea2-b82f-614e10573e70",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "758080c5-e734-55c6-82c2-625a5e6ba79c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "758080c5-e734-55c6-82c2-625a5e6ba79c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "fc51651a-17c4-5e54-ab2b-ed7cfe2f9eb5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "fc51651a-17c4-5e54-ab2b-ed7cfe2f9eb5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    
"zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "7e39d8bb-12df-5901-afc2-cedd03fdb64b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "7e39d8bb-12df-5901-afc2-cedd03fdb64b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "351029bb-918c-5b5a-914d-02a6f62695fb"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "351029bb-918c-5b5a-914d-02a6f62695fb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "d84166dd-d7b2-42ba-9cad-e47406906e67"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "d84166dd-d7b2-42ba-9cad-e47406906e67",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": 
false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "d84166dd-d7b2-42ba-9cad-e47406906e67",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "0195adf0-3e38-4d5c-9837-7294d29d665b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "7dd9c048-1c30-44df-bde3-975ecaa7e783",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "af729de1-6d51-4b59-9ce0-16f4d36729d7"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "af729de1-6d51-4b59-9ce0-16f4d36729d7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "af729de1-6d51-4b59-9ce0-16f4d36729d7",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "7065f660-48a2-4c51-8b1b-a491cd0dfe1b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "3367c1c5-a34a-4b52-bc0d-c826f0675e10",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    
"d43687e9-57de-41b6-9724-433d0adcea06"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "d43687e9-57de-41b6-9724-433d0adcea06",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "d43687e9-57de-41b6-9724-433d0adcea06",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "b71b1a3d-8efa-40a5-ab4c-c50f0a9ee06f",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "8dcd41cc-1bce-41af-a5f9-80f263f4b861",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "965c45d7-1a1c-4f3b-b997-aa6404f5a658"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "965c45d7-1a1c-4f3b-b997-aa6404f5a658",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:10:02.517   05:55:23 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:10:02.517   05:55:23 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0
00:10:02.517   05:55:23 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:10:02.517   05:55:23 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 80588
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@954 -- # '[' -z 80588 ']'
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@958 -- # kill -0 80588
00:10:02.517    05:55:23 blockdev_general -- common/autotest_common.sh@959 -- # uname
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:02.517    05:55:23 blockdev_general -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80588
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:02.517  killing process with pid 80588
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80588'
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@973 -- # kill 80588
00:10:02.517   05:55:23 blockdev_general -- common/autotest_common.sh@978 -- # wait 80588
00:10:03.110   05:55:23 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:03.110   05:55:23 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:10:03.110   05:55:23 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:10:03.110   05:55:23 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:03.110   05:55:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:03.110  ************************************
00:10:03.110  START TEST bdev_hello_world
00:10:03.110  ************************************
00:10:03.110   05:55:23 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:10:03.110  [2024-11-18 05:55:23.903868] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:10:03.110  [2024-11-18 05:55:23.904074] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80631 ]
00:10:03.110  [2024-11-18 05:55:24.058942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:03.110  [2024-11-18 05:55:24.081556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.369  [2024-11-18 05:55:24.192697] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:03.369  [2024-11-18 05:55:24.192847] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:03.369  [2024-11-18 05:55:24.200619] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:03.369  [2024-11-18 05:55:24.200709] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:03.369  [2024-11-18 05:55:24.208645] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:03.369  [2024-11-18 05:55:24.208718] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:03.369  [2024-11-18 05:55:24.208758] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:03.369  [2024-11-18 05:55:24.286621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:03.369  [2024-11-18 05:55:24.286746] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:03.369  [2024-11-18 05:55:24.286780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80
00:10:03.369  [2024-11-18 05:55:24.286823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:03.369  [2024-11-18 05:55:24.289471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:03.369  [2024-11-18 05:55:24.289525] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:03.628  [2024-11-18 05:55:24.417567] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:10:03.628  [2024-11-18 05:55:24.417649] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0
00:10:03.628  [2024-11-18 05:55:24.417744] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:10:03.628  [2024-11-18 05:55:24.417846] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:10:03.628  [2024-11-18 05:55:24.417923] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:10:03.628  [2024-11-18 05:55:24.417965] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:10:03.628  [2024-11-18 05:55:24.418032] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:10:03.628  
00:10:03.628  [2024-11-18 05:55:24.418083] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:10:03.887  
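The hello_bdev notices above trace the example's full lifecycle: start the app, open Malloc0, grab an io channel, write the string, read it back ("Hello World!"), and stop. Run standalone, it is just the traced command:

    # Run the hello world example against the Malloc0 bdev, as in the trace.
    build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0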
00:10:03.887  real	0m0.830s
00:10:03.887  user	0m0.491s
00:10:03.887  sys	0m0.214s
00:10:03.887   05:55:24 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:03.887   05:55:24 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:10:03.887  ************************************
00:10:03.887  END TEST bdev_hello_world
00:10:03.887  ************************************
00:10:03.887   05:55:24 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:10:03.887   05:55:24 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:03.887   05:55:24 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:03.887   05:55:24 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:03.887  ************************************
00:10:03.887  START TEST bdev_bounds
00:10:03.887  ************************************
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=80656
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:10:03.887  Process bdevio pid: 80656
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 80656'
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 80656
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 80656 ']'
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:03.887  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:03.887   05:55:24 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:10:03.887  [2024-11-18 05:55:24.805951] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:10:03.887  [2024-11-18 05:55:24.806944] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80656 ]
00:10:04.146  [2024-11-18 05:55:24.958905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:04.146  [2024-11-18 05:55:24.984989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:04.146  [2024-11-18 05:55:24.985065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:04.146  [2024-11-18 05:55:24.985140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:04.146  [2024-11-18 05:55:25.095290] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:04.146  [2024-11-18 05:55:25.095604] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:04.146  [2024-11-18 05:55:25.103217] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:04.146  [2024-11-18 05:55:25.103409] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:04.146  [2024-11-18 05:55:25.111247] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:04.146  [2024-11-18 05:55:25.111473] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:04.146  [2024-11-18 05:55:25.111608] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:04.405  [2024-11-18 05:55:25.187154] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:04.405  [2024-11-18 05:55:25.187523] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:04.405  [2024-11-18 05:55:25.187679] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80
00:10:04.405  [2024-11-18 05:55:25.187829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:04.405  [2024-11-18 05:55:25.190723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:04.405  [2024-11-18 05:55:25.190918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:04.972   05:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:04.972   05:55:25 blockdev_general.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:10:04.972   05:55:25 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:10:04.972  I/O targets:
00:10:04.972    Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:10:04.972    Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:10:04.972    Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:10:04.972    Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:10:04.972    TestPT: 65536 blocks of 512 bytes (32 MiB)
00:10:04.972    raid0: 131072 blocks of 512 bytes (64 MiB)
00:10:04.972    concat0: 131072 blocks of 512 bytes (64 MiB)
00:10:04.972    raid1: 65536 blocks of 512 bytes (32 MiB)
00:10:04.972    AIO0: 5000 blocks of 2048 bytes (10 MiB)
00:10:04.972  
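bdevio was started above with -w (wait for the RPC trigger) and -s 0 (no pre-reserved memory), and the I/O-target inventory it printed covers every bdev from the fixture config; tests.py perform_tests then drives the CUnit suites that follow over each target in turn. The two-step pattern in isolation, sketched:

    # Start bdevio idle, then trigger the CUnit suites over RPC (sketch).
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ... wait for the RPC socket to come up, e.g. with waitforlisten ...
    test/bdev/bdevio/tests.py perform_tests
    # the harness tears bdevio down afterwards via its killprocess trap
    kill $bdevio_pid && wait $bdevio_pid || true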
00:10:04.972  
00:10:04.972       CUnit - A unit testing framework for C - Version 2.1-3
00:10:04.972       http://cunit.sourceforge.net/
00:10:04.972  
00:10:04.972  
00:10:04.972  Suite: bdevio tests on: AIO0
00:10:04.972    Test: blockdev write read block ...passed
00:10:04.972    Test: blockdev write zeroes read block ...passed
00:10:04.972    Test: blockdev write zeroes read no split ...passed
00:10:04.972    Test: blockdev write zeroes read split ...passed
00:10:04.972    Test: blockdev write zeroes read split partial ...passed
00:10:04.972    Test: blockdev reset ...passed
00:10:04.972    Test: blockdev write read 8 blocks ...passed
00:10:04.972    Test: blockdev write read size > 128k ...passed
00:10:04.972    Test: blockdev write read invalid size ...passed
00:10:04.972    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:04.972    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:04.972    Test: blockdev write read max offset ...passed
00:10:04.972    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:04.972    Test: blockdev writev readv 8 blocks ...passed
00:10:04.972    Test: blockdev writev readv 30 x 1block ...passed
00:10:04.972    Test: blockdev writev readv block ...passed
00:10:04.972    Test: blockdev writev readv size > 128k ...passed
00:10:04.972    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:04.972    Test: blockdev comparev and writev ...passed
00:10:04.972    Test: blockdev nvme passthru rw ...passed
00:10:04.972    Test: blockdev nvme passthru vendor specific ...passed
00:10:04.972    Test: blockdev nvme admin passthru ...passed
00:10:04.972    Test: blockdev copy ...passed
00:10:04.972  Suite: bdevio tests on: raid1
00:10:04.972    Test: blockdev write read block ...passed
00:10:04.972    Test: blockdev write zeroes read block ...passed
00:10:04.972    Test: blockdev write zeroes read no split ...passed
00:10:04.972    Test: blockdev write zeroes read split ...passed
00:10:04.972    Test: blockdev write zeroes read split partial ...passed
00:10:04.972    Test: blockdev reset ...passed
00:10:04.972    Test: blockdev write read 8 blocks ...passed
00:10:04.972    Test: blockdev write read size > 128k ...passed
00:10:04.972    Test: blockdev write read invalid size ...passed
00:10:04.973    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:04.973    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:04.973    Test: blockdev write read max offset ...passed
00:10:04.973    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:04.973    Test: blockdev writev readv 8 blocks ...passed
00:10:04.973    Test: blockdev writev readv 30 x 1block ...passed
00:10:04.973    Test: blockdev writev readv block ...passed
00:10:04.973    Test: blockdev writev readv size > 128k ...passed
00:10:04.973    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:04.973    Test: blockdev comparev and writev ...passed
00:10:04.973    Test: blockdev nvme passthru rw ...passed
00:10:04.973    Test: blockdev nvme passthru vendor specific ...passed
00:10:04.973    Test: blockdev nvme admin passthru ...passed
00:10:04.973    Test: blockdev copy ...passed
00:10:04.973  Suite: bdevio tests on: concat0
00:10:04.973    Test: blockdev write read block ...passed
00:10:04.973    Test: blockdev write zeroes read block ...passed
00:10:04.973    Test: blockdev write zeroes read no split ...passed
00:10:04.973    Test: blockdev write zeroes read split ...passed
00:10:04.973    Test: blockdev write zeroes read split partial ...passed
00:10:04.973    Test: blockdev reset ...passed
00:10:04.973    Test: blockdev write read 8 blocks ...passed
00:10:04.973    Test: blockdev write read size > 128k ...passed
00:10:04.973    Test: blockdev write read invalid size ...passed
00:10:04.973    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:04.973    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:04.973    Test: blockdev write read max offset ...passed
00:10:04.973    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:04.973    Test: blockdev writev readv 8 blocks ...passed
00:10:04.973    Test: blockdev writev readv 30 x 1block ...passed
00:10:04.973    Test: blockdev writev readv block ...passed
00:10:04.973    Test: blockdev writev readv size > 128k ...passed
00:10:04.973    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:04.973    Test: blockdev comparev and writev ...passed
00:10:04.973    Test: blockdev nvme passthru rw ...passed
00:10:04.973    Test: blockdev nvme passthru vendor specific ...passed
00:10:04.973    Test: blockdev nvme admin passthru ...passed
00:10:04.973    Test: blockdev copy ...passed
00:10:04.973  Suite: bdevio tests on: raid0
00:10:04.973    Test: blockdev write read block ...passed
00:10:04.973    Test: blockdev write zeroes read block ...passed
00:10:04.973    Test: blockdev write zeroes read no split ...passed
00:10:04.973    Test: blockdev write zeroes read split ...passed
00:10:04.973    Test: blockdev write zeroes read split partial ...passed
00:10:04.973    Test: blockdev reset ...passed
00:10:04.973    Test: blockdev write read 8 blocks ...passed
00:10:04.973    Test: blockdev write read size > 128k ...passed
00:10:04.973    Test: blockdev write read invalid size ...passed
00:10:04.973    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:04.973    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:04.973    Test: blockdev write read max offset ...passed
00:10:04.973    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:04.973    Test: blockdev writev readv 8 blocks ...passed
00:10:04.973    Test: blockdev writev readv 30 x 1block ...passed
00:10:04.973    Test: blockdev writev readv block ...passed
00:10:04.973    Test: blockdev writev readv size > 128k ...passed
00:10:04.973    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:04.973    Test: blockdev comparev and writev ...passed
00:10:04.973    Test: blockdev nvme passthru rw ...passed
00:10:04.973    Test: blockdev nvme passthru vendor specific ...passed
00:10:04.973    Test: blockdev nvme admin passthru ...passed
00:10:04.973    Test: blockdev copy ...passed
00:10:04.973  Suite: bdevio tests on: TestPT
00:10:04.973    Test: blockdev write read block ...passed
00:10:04.973    Test: blockdev write zeroes read block ...passed
00:10:04.973    Test: blockdev write zeroes read no split ...passed
00:10:04.973    Test: blockdev write zeroes read split ...passed
00:10:05.232    Test: blockdev write zeroes read split partial ...passed
00:10:05.232    Test: blockdev reset ...passed
00:10:05.232    Test: blockdev write read 8 blocks ...passed
00:10:05.232    Test: blockdev write read size > 128k ...passed
00:10:05.232    Test: blockdev write read invalid size ...passed
00:10:05.232    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.232    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.232    Test: blockdev write read max offset ...passed
00:10:05.232    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.232    Test: blockdev writev readv 8 blocks ...passed
00:10:05.232    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.232    Test: blockdev writev readv block ...passed
00:10:05.232    Test: blockdev writev readv size > 128k ...passed
00:10:05.232    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.232    Test: blockdev comparev and writev ...passed
00:10:05.232    Test: blockdev nvme passthru rw ...passed
00:10:05.232    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.232    Test: blockdev nvme admin passthru ...passed
00:10:05.232    Test: blockdev copy ...passed
00:10:05.232  Suite: bdevio tests on: Malloc2p7
00:10:05.232    Test: blockdev write read block ...passed
00:10:05.232    Test: blockdev write zeroes read block ...passed
00:10:05.232    Test: blockdev write zeroes read no split ...passed
00:10:05.232    Test: blockdev write zeroes read split ...passed
00:10:05.232    Test: blockdev write zeroes read split partial ...passed
00:10:05.232    Test: blockdev reset ...passed
00:10:05.232    Test: blockdev write read 8 blocks ...passed
00:10:05.232    Test: blockdev write read size > 128k ...passed
00:10:05.232    Test: blockdev write read invalid size ...passed
00:10:05.232    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.232    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.232    Test: blockdev write read max offset ...passed
00:10:05.232    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.232    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.233    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.233    Test: blockdev comparev and writev ...passed
00:10:05.233    Test: blockdev nvme passthru rw ...passed
00:10:05.233    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.233    Test: blockdev nvme admin passthru ...passed
00:10:05.233    Test: blockdev copy ...passed
00:10:05.233  Suite: bdevio tests on: Malloc2p6
00:10:05.233    Test: blockdev write read block ...passed
00:10:05.233    Test: blockdev write zeroes read block ...passed
00:10:05.233    Test: blockdev write zeroes read no split ...passed
00:10:05.233    Test: blockdev write zeroes read split ...passed
00:10:05.233    Test: blockdev write zeroes read split partial ...passed
00:10:05.233    Test: blockdev reset ...passed
00:10:05.233    Test: blockdev write read 8 blocks ...passed
00:10:05.233    Test: blockdev write read size > 128k ...passed
00:10:05.233    Test: blockdev write read invalid size ...passed
00:10:05.233    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.233    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.233    Test: blockdev write read max offset ...passed
00:10:05.233    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.233    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.233    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.233    Test: blockdev comparev and writev ...passed
00:10:05.233    Test: blockdev nvme passthru rw ...passed
00:10:05.233    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.233    Test: blockdev nvme admin passthru ...passed
00:10:05.233    Test: blockdev copy ...passed
00:10:05.233  Suite: bdevio tests on: Malloc2p5
00:10:05.233    Test: blockdev write read block ...passed
00:10:05.233    Test: blockdev write zeroes read block ...passed
00:10:05.233    Test: blockdev write zeroes read no split ...passed
00:10:05.233    Test: blockdev write zeroes read split ...passed
00:10:05.233    Test: blockdev write zeroes read split partial ...passed
00:10:05.233    Test: blockdev reset ...passed
00:10:05.233    Test: blockdev write read 8 blocks ...passed
00:10:05.233    Test: blockdev write read size > 128k ...passed
00:10:05.233    Test: blockdev write read invalid size ...passed
00:10:05.233    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.233    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.233    Test: blockdev write read max offset ...passed
00:10:05.233    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.233    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.233    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.233    Test: blockdev comparev and writev ...passed
00:10:05.233    Test: blockdev nvme passthru rw ...passed
00:10:05.233    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.233    Test: blockdev nvme admin passthru ...passed
00:10:05.233    Test: blockdev copy ...passed
00:10:05.233  Suite: bdevio tests on: Malloc2p4
00:10:05.233    Test: blockdev write read block ...passed
00:10:05.233    Test: blockdev write zeroes read block ...passed
00:10:05.233    Test: blockdev write zeroes read no split ...passed
00:10:05.233    Test: blockdev write zeroes read split ...passed
00:10:05.233    Test: blockdev write zeroes read split partial ...passed
00:10:05.233    Test: blockdev reset ...passed
00:10:05.233    Test: blockdev write read 8 blocks ...passed
00:10:05.233    Test: blockdev write read size > 128k ...passed
00:10:05.233    Test: blockdev write read invalid size ...passed
00:10:05.233    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.233    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.233    Test: blockdev write read max offset ...passed
00:10:05.233    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.233    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.233    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.233    Test: blockdev comparev and writev ...passed
00:10:05.233    Test: blockdev nvme passthru rw ...passed
00:10:05.233    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.233    Test: blockdev nvme admin passthru ...passed
00:10:05.233    Test: blockdev copy ...passed
00:10:05.233  Suite: bdevio tests on: Malloc2p3
00:10:05.233    Test: blockdev write read block ...passed
00:10:05.233    Test: blockdev write zeroes read block ...passed
00:10:05.233    Test: blockdev write zeroes read no split ...passed
00:10:05.233    Test: blockdev write zeroes read split ...passed
00:10:05.233    Test: blockdev write zeroes read split partial ...passed
00:10:05.233    Test: blockdev reset ...passed
00:10:05.233    Test: blockdev write read 8 blocks ...passed
00:10:05.233    Test: blockdev write read size > 128k ...passed
00:10:05.233    Test: blockdev write read invalid size ...passed
00:10:05.233    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.233    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.233    Test: blockdev write read max offset ...passed
00:10:05.233    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.233    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.233    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.233    Test: blockdev comparev and writev ...passed
00:10:05.233    Test: blockdev nvme passthru rw ...passed
00:10:05.233    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.233    Test: blockdev nvme admin passthru ...passed
00:10:05.233    Test: blockdev copy ...passed
00:10:05.233  Suite: bdevio tests on: Malloc2p2
00:10:05.233    Test: blockdev write read block ...passed
00:10:05.233    Test: blockdev write zeroes read block ...passed
00:10:05.233    Test: blockdev write zeroes read no split ...passed
00:10:05.233    Test: blockdev write zeroes read split ...passed
00:10:05.233    Test: blockdev write zeroes read split partial ...passed
00:10:05.233    Test: blockdev reset ...passed
00:10:05.233    Test: blockdev write read 8 blocks ...passed
00:10:05.233    Test: blockdev write read size > 128k ...passed
00:10:05.233    Test: blockdev write read invalid size ...passed
00:10:05.233    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.233    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.233    Test: blockdev write read max offset ...passed
00:10:05.233    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.233    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.233    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.233    Test: blockdev comparev and writev ...passed
00:10:05.233    Test: blockdev nvme passthru rw ...passed
00:10:05.233    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.233    Test: blockdev nvme admin passthru ...passed
00:10:05.233    Test: blockdev copy ...passed
00:10:05.233  Suite: bdevio tests on: Malloc2p1
00:10:05.233    Test: blockdev write read block ...passed
00:10:05.233    Test: blockdev write zeroes read block ...passed
00:10:05.233    Test: blockdev write zeroes read no split ...passed
00:10:05.233    Test: blockdev write zeroes read split ...passed
00:10:05.233    Test: blockdev write zeroes read split partial ...passed
00:10:05.233    Test: blockdev reset ...passed
00:10:05.233    Test: blockdev write read 8 blocks ...passed
00:10:05.233    Test: blockdev write read size > 128k ...passed
00:10:05.233    Test: blockdev write read invalid size ...passed
00:10:05.233    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.233    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.233    Test: blockdev write read max offset ...passed
00:10:05.233    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.233    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.233    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.233    Test: blockdev comparev and writev ...passed
00:10:05.233    Test: blockdev nvme passthru rw ...passed
00:10:05.233    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.233    Test: blockdev nvme admin passthru ...passed
00:10:05.233    Test: blockdev copy ...passed
00:10:05.233  Suite: bdevio tests on: Malloc2p0
00:10:05.233    Test: blockdev write read block ...passed
00:10:05.233    Test: blockdev write zeroes read block ...passed
00:10:05.233    Test: blockdev write zeroes read no split ...passed
00:10:05.233    Test: blockdev write zeroes read split ...passed
00:10:05.233    Test: blockdev write zeroes read split partial ...passed
00:10:05.233    Test: blockdev reset ...passed
00:10:05.233    Test: blockdev write read 8 blocks ...passed
00:10:05.233    Test: blockdev write read size > 128k ...passed
00:10:05.233    Test: blockdev write read invalid size ...passed
00:10:05.233    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.233    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.233    Test: blockdev write read max offset ...passed
00:10:05.233    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.233    Test: blockdev writev readv 8 blocks ...passed
00:10:05.233    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.233    Test: blockdev writev readv block ...passed
00:10:05.233    Test: blockdev writev readv size > 128k ...passed
00:10:05.234    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.234    Test: blockdev comparev and writev ...passed
00:10:05.234    Test: blockdev nvme passthru rw ...passed
00:10:05.234    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.234    Test: blockdev nvme admin passthru ...passed
00:10:05.234    Test: blockdev copy ...passed
00:10:05.234  Suite: bdevio tests on: Malloc1p1
00:10:05.234    Test: blockdev write read block ...passed
00:10:05.234    Test: blockdev write zeroes read block ...passed
00:10:05.234    Test: blockdev write zeroes read no split ...passed
00:10:05.234    Test: blockdev write zeroes read split ...passed
00:10:05.234    Test: blockdev write zeroes read split partial ...passed
00:10:05.234    Test: blockdev reset ...passed
00:10:05.234    Test: blockdev write read 8 blocks ...passed
00:10:05.234    Test: blockdev write read size > 128k ...passed
00:10:05.234    Test: blockdev write read invalid size ...passed
00:10:05.234    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.234    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.234    Test: blockdev write read max offset ...passed
00:10:05.234    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.234    Test: blockdev writev readv 8 blocks ...passed
00:10:05.234    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.234    Test: blockdev writev readv block ...passed
00:10:05.234    Test: blockdev writev readv size > 128k ...passed
00:10:05.234    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.234    Test: blockdev comparev and writev ...passed
00:10:05.234    Test: blockdev nvme passthru rw ...passed
00:10:05.234    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.234    Test: blockdev nvme admin passthru ...passed
00:10:05.234    Test: blockdev copy ...passed
00:10:05.234  Suite: bdevio tests on: Malloc1p0
00:10:05.234    Test: blockdev write read block ...passed
00:10:05.234    Test: blockdev write zeroes read block ...passed
00:10:05.234    Test: blockdev write zeroes read no split ...passed
00:10:05.234    Test: blockdev write zeroes read split ...passed
00:10:05.234    Test: blockdev write zeroes read split partial ...passed
00:10:05.234    Test: blockdev reset ...passed
00:10:05.234    Test: blockdev write read 8 blocks ...passed
00:10:05.234    Test: blockdev write read size > 128k ...passed
00:10:05.234    Test: blockdev write read invalid size ...passed
00:10:05.234    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.234    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.234    Test: blockdev write read max offset ...passed
00:10:05.234    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.234    Test: blockdev writev readv 8 blocks ...passed
00:10:05.234    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.234    Test: blockdev writev readv block ...passed
00:10:05.234    Test: blockdev writev readv size > 128k ...passed
00:10:05.234    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.234    Test: blockdev comparev and writev ...passed
00:10:05.234    Test: blockdev nvme passthru rw ...passed
00:10:05.234    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.234    Test: blockdev nvme admin passthru ...passed
00:10:05.234    Test: blockdev copy ...passed
00:10:05.234  Suite: bdevio tests on: Malloc0
00:10:05.234    Test: blockdev write read block ...passed
00:10:05.234    Test: blockdev write zeroes read block ...passed
00:10:05.234    Test: blockdev write zeroes read no split ...passed
00:10:05.234    Test: blockdev write zeroes read split ...passed
00:10:05.234    Test: blockdev write zeroes read split partial ...passed
00:10:05.234    Test: blockdev reset ...passed
00:10:05.234    Test: blockdev write read 8 blocks ...passed
00:10:05.234    Test: blockdev write read size > 128k ...passed
00:10:05.234    Test: blockdev write read invalid size ...passed
00:10:05.234    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:05.234    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:05.234    Test: blockdev write read max offset ...passed
00:10:05.234    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:05.234    Test: blockdev writev readv 8 blocks ...passed
00:10:05.234    Test: blockdev writev readv 30 x 1block ...passed
00:10:05.234    Test: blockdev writev readv block ...passed
00:10:05.234    Test: blockdev writev readv size > 128k ...passed
00:10:05.234    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:05.234    Test: blockdev comparev and writev ...passed
00:10:05.234    Test: blockdev nvme passthru rw ...passed
00:10:05.234    Test: blockdev nvme passthru vendor specific ...passed
00:10:05.234    Test: blockdev nvme admin passthru ...passed
00:10:05.234    Test: blockdev copy ...passed
00:10:05.234  
00:10:05.234  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:05.234                suites     16     16    n/a      0        0
00:10:05.234                 tests    368    368    368      0        0
00:10:05.234               asserts   2224   2224   2224      0      n/a
00:10:05.234  
00:10:05.234  Elapsed time =    0.821 seconds
00:10:05.234  0
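[editor's note] The totals are internally consistent: the 16 suites each ran the same 23 tests, giving 368, and the 2224 asserts divide evenly into 139 per suite; the trailing 0 is the harness exit status. A trivial cross-check:

    /* Cross-check of the Run Summary above (values taken from the log). */
    #include <assert.h>
    int main(void) {
        assert(16 * 23 == 368);     /* suites x tests-per-suite = tests ran */
        assert(2224 / 16 == 139);   /* asserts split evenly across suites  */
        return 0;
    }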
00:10:05.492   05:55:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 80656
00:10:05.492   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 80656 ']'
00:10:05.492   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 80656
00:10:05.492    05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:10:05.492   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:05.493    05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80656
00:10:05.493  killing process with pid 80656
00:10:05.493   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:05.493   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:05.493   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80656'
00:10:05.493   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # kill 80656
00:10:05.493   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@978 -- # wait 80656
00:10:05.752   05:55:26 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
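[editor's note] The killprocess sequence above probes pid 80656, checks the process name isn't a privileged one, sends the kill, and waits for it to exit before clearing the trap. A minimal C equivalent of that shutdown pattern (a hypothetical helper, not the autotest_common.sh implementation; kill(pid, 0) delivers no signal, it only reports whether the process still exists):

    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    static int killprocess(pid_t pid) {
        if (kill(pid, 0) != 0)
            return errno == ESRCH ? 0 : -1;   /* already gone, or no permission */
        fprintf(stderr, "killing process with pid %d\n", (int)pid);
        if (kill(pid, SIGTERM) != 0)
            return -1;
        while (kill(pid, 0) == 0)             /* cf. the script's `wait $pid` */
            usleep(100 * 1000);               /* 100 ms between liveness probes */
        return 0;
    }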
00:10:05.752  
00:10:05.752  real	0m1.760s
00:10:05.752  user	0m4.368s
00:10:05.752  sys	0m0.441s
00:10:05.752   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:05.752   05:55:26 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:10:05.752  ************************************
00:10:05.752  END TEST bdev_bounds
00:10:05.752  ************************************
00:10:05.752   05:55:26 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:10:05.752   05:55:26 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:05.752   05:55:26 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:05.752   05:55:26 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:05.752  ************************************
00:10:05.752  START TEST bdev_nbd
00:10:05.752  ************************************
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:10:05.752    05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=80706
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 80706 /var/tmp/spdk-nbd.sock
00:10:05.752  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 80706 ']'
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:05.752   05:55:26 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
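[editor's note] waitforlisten above blocks until bdev_svc's JSON-RPC server accepts connections on /var/tmp/spdk-nbd.sock. A sketch of that readiness probe in C (hypothetical helper, assuming a plain connect() succeeds once the server is listening):

    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Retry connect() on the UNIX socket until the RPC server is up. */
    static int wait_for_listen(const char *path, int max_retries) {
        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            struct sockaddr_un addr = { .sun_family = AF_UNIX };
            strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
                close(fd);
                return 0;                 /* server is listening */
            }
            close(fd);
            usleep(100 * 1000);           /* wait and retry, as the script does */
        }
        return -1;
    }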
00:10:05.752  [2024-11-18 05:55:26.610289] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:10:05.752  [2024-11-18 05:55:26.610463] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:06.012  [2024-11-18 05:55:26.766844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:06.012  [2024-11-18 05:55:26.789505] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
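[editor's note] bdev_svc itself is a thin SPDK application: it brings up the DPDK EAL with the parameters logged above, starts a reactor on core 0, and applies the --json config. Roughly, the startup half looks like this sketch against SPDK's public app API (simplified; option parsing and the JSON config hookup are omitted):

    #include "spdk/event.h"

    static void start_fn(void *arg) {
        /* By this point the EAL/reactor notices above have been printed;
         * bdev_svc just idles here until killprocess tears it down. */
        (void)arg;
    }

    int main(void) {
        struct spdk_app_opts opts;
        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "bdev_svc";
        opts.rpc_addr = "/var/tmp/spdk-nbd.sock";   /* the -r argument above */

        int rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }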
00:10:06.012  [2024-11-18 05:55:26.897735] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:06.012  [2024-11-18 05:55:26.897862] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:06.012  [2024-11-18 05:55:26.905652] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:06.012  [2024-11-18 05:55:26.905933] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:06.012  [2024-11-18 05:55:26.913696] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:06.012  [2024-11-18 05:55:26.913956] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:06.012  [2024-11-18 05:55:26.913991] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:06.270  [2024-11-18 05:55:26.991382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:06.270  [2024-11-18 05:55:26.991472] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:06.270  [2024-11-18 05:55:26.991496] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80
00:10:06.270  [2024-11-18 05:55:26.991509] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:06.270  [2024-11-18 05:55:26.994371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:06.270  [2024-11-18 05:55:26.994573] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
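[editor's note] The vbdev_passthru notices above show the module deferring until its base bdev (Malloc3) arrives, then opening and claiming it and registering TestPT on top. The open-and-claim step, sketched against the public bdev API (simplified; the real module also creates an io_device and channels, and passthru_if here stands in for its module descriptor):

    #include "spdk/bdev.h"
    #include "spdk/bdev_module.h"

    static void base_event_cb(enum spdk_bdev_event_type type,
                              struct spdk_bdev *bdev, void *ctx) {
        /* The real module handles SPDK_BDEV_EVENT_REMOVE here. */
        (void)type; (void)bdev; (void)ctx;
    }

    extern struct spdk_bdev_module passthru_if;   /* module descriptor (sketch) */

    static int open_and_claim(const char *base_name, struct spdk_bdev_desc **desc) {
        /* "Match on Malloc3" / "base bdev opened" correspond to this call. */
        int rc = spdk_bdev_open_ext(base_name, true, base_event_cb, NULL, desc);
        if (rc != 0)
            return rc;   /* base bdev not there yet: creation stays deferred */

        /* "bdev claimed": take module ownership of the base bdev. */
        rc = spdk_bdev_module_claim_bdev(spdk_bdev_desc_get_bdev(*desc),
                                         *desc, &passthru_if);
        if (rc != 0)
            spdk_bdev_close(*desc);
        return rc;
    }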
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:10:06.838   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:06.838    05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:10:07.098    05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:07.098  1+0 records in
00:10:07.098  1+0 records out
00:10:07.098  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000256837 s, 15.9 MB/s
00:10:07.098    05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:07.098   05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
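[editor's note] Each iteration of the loop above maps one bdev to a /dev/nbdN node via the nbd_start_disk RPC, polls /proc/partitions until the device appears, and proves it is readable with a single 4 KiB O_DIRECT dd; the same pattern repeats below for the remaining devices. The dd step in C, as a sketch (device path is per-iteration; O_DIRECT requires an aligned buffer):

    #define _GNU_SOURCE            /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Read one 4096-byte block from the nbd device, bypassing the page
     * cache, mirroring `dd if=/dev/nbdN ... bs=4096 count=1 iflag=direct`. */
    static int check_readable(const char *dev) {
        void *buf = NULL;
        if (posix_memalign(&buf, 4096, 4096) != 0)
            return -1;
        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) {
            free(buf);
            return -1;
        }
        ssize_t n = read(fd, buf, 4096);
        close(fd);
        free(buf);
        return n == 4096 ? 0 : -1;
    }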
00:10:07.098    05:55:27 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:10:07.357    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:07.357  1+0 records in
00:10:07.357  1+0 records out
00:10:07.357  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330926 s, 12.4 MB/s
00:10:07.357    05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:07.357   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:07.357    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:10:07.616    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:07.616  1+0 records in
00:10:07.616  1+0 records out
00:10:07.616  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035059 s, 11.7 MB/s
00:10:07.616    05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:07.616   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:07.616    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:10:07.875    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:07.875  1+0 records in
00:10:07.875  1+0 records out
00:10:07.875  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000337599 s, 12.1 MB/s
00:10:07.875    05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:07.875   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:07.875    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:10:08.134    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:08.134  1+0 records in
00:10:08.134  1+0 records out
00:10:08.134  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00343135 s, 1.2 MB/s
00:10:08.134    05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:08.134   05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:08.134    05:55:28 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:10:08.396    05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:08.396  1+0 records in
00:10:08.396  1+0 records out
00:10:08.396  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000370357 s, 11.1 MB/s
00:10:08.396    05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:08.396   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:08.396    05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:10:08.657    05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:08.657  1+0 records in
00:10:08.657  1+0 records out
00:10:08.657  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442537 s, 9.3 MB/s
00:10:08.657    05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:08.657   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:08.657    05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:10:08.915    05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:08.915  1+0 records in
00:10:08.915  1+0 records out
00:10:08.915  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000486503 s, 8.4 MB/s
00:10:08.915    05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:08.915   05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:08.915    05:55:29 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:10:09.174    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:09.174  1+0 records in
00:10:09.174  1+0 records out
00:10:09.174  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457377 s, 9.0 MB/s
00:10:09.174    05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:09.174   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:09.174    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:10:09.432    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:09.432  1+0 records in
00:10:09.432  1+0 records out
00:10:09.432  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000648903 s, 6.3 MB/s
00:10:09.432    05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:09.432   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:09.432    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:10:09.691    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:09.691  1+0 records in
00:10:09.691  1+0 records out
00:10:09.691  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00063366 s, 6.5 MB/s
00:10:09.691    05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:09.691   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:09.691    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:10:09.950    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:09.950  1+0 records in
00:10:09.950  1+0 records out
00:10:09.950  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000461519 s, 8.9 MB/s
00:10:09.950    05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:09.950   05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:09.950    05:55:30 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:10:10.209    05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:10.209   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:10.209  1+0 records in
00:10:10.209  1+0 records out
00:10:10.209  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603867 s, 6.8 MB/s
00:10:10.210    05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:10.210   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:10.210   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:10.468   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:10.468   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:10.468   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:10.468   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:10.468    05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:10:10.726    05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:10.726  1+0 records in
00:10:10.726  1+0 records out
00:10:10.726  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00059736 s, 6.9 MB/s
00:10:10.726    05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:10.726   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:10.726    05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:10:10.985    05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:10.985  1+0 records in
00:10:10.985  1+0 records out
00:10:10.985  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000752853 s, 5.4 MB/s
00:10:10.985    05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:10.985   05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:10.985    05:55:31 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15
00:10:11.244    05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:11.244  1+0 records in
00:10:11.244  1+0 records out
00:10:11.244  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011666 s, 3.5 MB/s
00:10:11.244    05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:11.244   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
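Every `nbd_start_disk <bdev>` call in the loop above omits the device argument, so the RPC picks the next free node and prints it; the wrapper captures that stdout (`nbd_device=$(...)`) and hands the basename to `waitfornbd`. A condensed sketch of this bring-up loop, using the socket and script paths shown in the trace and the `waitfornbd` sketch above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-nbd.sock
  for bdev in Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 \
              Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT \
              raid0 concat0 raid1 AIO0; do
      # with no explicit /dev/nbdX argument the RPC returns the node it chose
      nbd_device=$("$rpc" -s "$sock" nbd_start_disk "$bdev")
      waitfornbd "$(basename "$nbd_device")"
  done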
00:10:11.244    05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd0",
00:10:11.503      "bdev_name": "Malloc0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd1",
00:10:11.503      "bdev_name": "Malloc1p0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd2",
00:10:11.503      "bdev_name": "Malloc1p1"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd3",
00:10:11.503      "bdev_name": "Malloc2p0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd4",
00:10:11.503      "bdev_name": "Malloc2p1"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd5",
00:10:11.503      "bdev_name": "Malloc2p2"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd6",
00:10:11.503      "bdev_name": "Malloc2p3"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd7",
00:10:11.503      "bdev_name": "Malloc2p4"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd8",
00:10:11.503      "bdev_name": "Malloc2p5"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd9",
00:10:11.503      "bdev_name": "Malloc2p6"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd10",
00:10:11.503      "bdev_name": "Malloc2p7"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd11",
00:10:11.503      "bdev_name": "TestPT"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd12",
00:10:11.503      "bdev_name": "raid0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd13",
00:10:11.503      "bdev_name": "concat0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd14",
00:10:11.503      "bdev_name": "raid1"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd15",
00:10:11.503      "bdev_name": "AIO0"
00:10:11.503    }
00:10:11.503  ]'
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:10:11.503    05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:10:11.503    05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd0",
00:10:11.503      "bdev_name": "Malloc0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd1",
00:10:11.503      "bdev_name": "Malloc1p0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd2",
00:10:11.503      "bdev_name": "Malloc1p1"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd3",
00:10:11.503      "bdev_name": "Malloc2p0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd4",
00:10:11.503      "bdev_name": "Malloc2p1"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd5",
00:10:11.503      "bdev_name": "Malloc2p2"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd6",
00:10:11.503      "bdev_name": "Malloc2p3"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd7",
00:10:11.503      "bdev_name": "Malloc2p4"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd8",
00:10:11.503      "bdev_name": "Malloc2p5"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd9",
00:10:11.503      "bdev_name": "Malloc2p6"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd10",
00:10:11.503      "bdev_name": "Malloc2p7"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd11",
00:10:11.503      "bdev_name": "TestPT"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd12",
00:10:11.503      "bdev_name": "raid0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd13",
00:10:11.503      "bdev_name": "concat0"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd14",
00:10:11.503      "bdev_name": "raid1"
00:10:11.503    },
00:10:11.503    {
00:10:11.503      "nbd_device": "/dev/nbd15",
00:10:11.503      "bdev_name": "AIO0"
00:10:11.503    }
00:10:11.503  ]'
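`nbd_get_disks` answers with the JSON array recorded above, one {nbd_device, bdev_name} pair per attached export; the trace then replays the same JSON through `jq -r '.[] | .nbd_device'` to flatten the device paths into a bash array. The same two steps in isolation, assuming the socket path from this run:

  nbd_disks_json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  # -r emits raw strings, so the word-split result populates a plain array
  nbd_disks_name=($(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device'))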
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15'
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15')
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:11.503   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:11.762    05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:11.762   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:12.021    05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:12.021   05:55:32 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:10:12.279    05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:12.279   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:10:12.538    05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:12.538   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:10:12.797    05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:12.797   05:55:33 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:10:13.056    05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:13.056   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:10:13.623    05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:10:13.623   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:10:13.623   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:10:13.624    05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:13.624   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:10:13.883    05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:13.883   05:55:34 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:10:14.141    05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:14.141   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:10:14.399    05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:14.399   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:10:14.658    05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:14.658   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:10:14.917    05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:14.917   05:55:35 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:10:15.177    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:10:15.177   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:10:15.177   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:10:15.177   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:15.177   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:15.177   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:10:15.177   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:15.177   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:10:15.441    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:15.441   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:10:15.722    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:10:15.722   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:10:15.722   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:10:15.722   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:15.722   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:15.722   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:10:15.722   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:15.722   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
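Teardown mirrors bring-up: each `nbd_stop_disk` above is followed by `waitfornbd_exit`, which polls until the node drops out of /proc/partitions. In this run every device detached before the first poll (grep fails and the loop breaks immediately), so the sleep branch in this sketch is an assumption:

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1   # still attached; assumed delay before re-polling
          else
              break       # gone from /proc/partitions, as traced above
          fi
      done
      return 0
  }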
00:10:15.722    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:15.722    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:15.722     05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:15.993    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:15.993     05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:15.993     05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:15.993    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:15.993     05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:10:15.993     05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:15.993     05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:10:15.993    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:10:15.993    05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:10:15.993   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:10:15.993   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:10:15.993   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
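With everything stopped, `nbd_get_count` confirms the server holds zero exports: `nbd_get_disks` now returns `[]`, jq yields an empty string, and `grep -c /dev/nbd` counts zero matches (the trailing `true` absorbs grep's nonzero exit status when nothing matches). A sketch of that check, with the failure action assumed:

  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
      -s /var/tmp/spdk-nbd.sock nbd_get_disks \
      | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  # any leftover export means the stop loop failed
  [ "$count" -ne 0 ] && exit 1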
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:15.994   05:55:36 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:16.253  /dev/nbd0
00:10:16.512    05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:16.512  1+0 records in
00:10:16.512  1+0 records out
00:10:16.512  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196485 s, 20.8 MB/s
00:10:16.512    05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
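The pass that starts here (`nbd_rpc_data_verify`) repeats the whole cycle but pins every bdev to an explicitly chosen node, in an order that deliberately differs from the kernel's own numbering (Malloc1p1 lands on /dev/nbd10, and so on), then reruns the same 4 KiB O_DIRECT read per device. A condensed sketch of that mapping, using the two lists recorded in the trace:

  bdev_list=(Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2
             Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT
             raid0 concat0 raid1 AIO0)
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13
            /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5
            /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9)
  for ((i = 0; i < 16; i++)); do
      # the second argument forces the node instead of letting SPDK pick one
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
          nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
      waitfornbd "$(basename "${nbd_list[i]}")"
  done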
00:10:16.512   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1
00:10:16.771  /dev/nbd1
00:10:16.771    05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:16.771  1+0 records in
00:10:16.771  1+0 records out
00:10:16.771  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248812 s, 16.5 MB/s
00:10:16.771    05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:16.771   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10
00:10:17.030  /dev/nbd10
00:10:17.030    05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:17.030  1+0 records in
00:10:17.030  1+0 records out
00:10:17.030  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309989 s, 13.2 MB/s
00:10:17.030    05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:17.030   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:17.031   05:55:37 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11
00:10:17.290  /dev/nbd11
00:10:17.290    05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:17.290  1+0 records in
00:10:17.290  1+0 records out
00:10:17.290  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414976 s, 9.9 MB/s
00:10:17.290    05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:17.290   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12
00:10:17.549  /dev/nbd12
00:10:17.550    05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:17.550  1+0 records in
00:10:17.550  1+0 records out
00:10:17.550  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437812 s, 9.4 MB/s
00:10:17.550    05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:17.550   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13
00:10:17.809  /dev/nbd13
00:10:17.809    05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:17.809  1+0 records in
00:10:17.809  1+0 records out
00:10:17.809  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409691 s, 10.0 MB/s
00:10:17.809    05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:17.809   05:55:38 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14
00:10:18.068  /dev/nbd14
00:10:18.069    05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:18.069  1+0 records in
00:10:18.069  1+0 records out
00:10:18.069  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421755 s, 9.7 MB/s
00:10:18.069    05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:18.069   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15
00:10:18.333  /dev/nbd15
00:10:18.333    05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:18.333  1+0 records in
00:10:18.333  1+0 records out
00:10:18.333  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350343 s, 11.7 MB/s
00:10:18.333    05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:18.333   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2
00:10:18.594  /dev/nbd2
00:10:18.594    05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:18.594  1+0 records in
00:10:18.594  1+0 records out
00:10:18.594  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000494757 s, 8.3 MB/s
00:10:18.594    05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:18.594   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3
00:10:18.852  /dev/nbd3
00:10:19.111    05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:19.111  1+0 records in
00:10:19.111  1+0 records out
00:10:19.111  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532879 s, 7.7 MB/s
00:10:19.111    05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:19.111   05:55:39 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4
00:10:19.111  /dev/nbd4
00:10:19.369    05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:19.369  1+0 records in
00:10:19.369  1+0 records out
00:10:19.369  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619386 s, 6.6 MB/s
00:10:19.369    05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:19.369   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5
00:10:19.369  /dev/nbd5
00:10:19.628    05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:19.628  1+0 records in
00:10:19.628  1+0 records out
00:10:19.628  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528594 s, 7.7 MB/s
00:10:19.628    05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:19.628   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6
00:10:19.628  /dev/nbd6
00:10:19.887    05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6
00:10:19.887   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6
00:10:19.887   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:10:19.887   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:19.888  1+0 records in
00:10:19.888  1+0 records out
00:10:19.888  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000577942 s, 7.1 MB/s
00:10:19.888    05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:19.888   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7
00:10:20.146  /dev/nbd7
00:10:20.146    05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:20.146  1+0 records in
00:10:20.146  1+0 records out
00:10:20.146  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000761823 s, 5.4 MB/s
00:10:20.146    05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:20.146   05:55:40 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8
00:10:20.406  /dev/nbd8
00:10:20.406    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:20.406  1+0 records in
00:10:20.406  1+0 records out
00:10:20.406  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667157 s, 6.1 MB/s
00:10:20.406    05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:20.406   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9
00:10:20.666  /dev/nbd9
00:10:20.666    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:20.666  1+0 records in
00:10:20.666  1+0 records out
00:10:20.666  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101067 s, 4.1 MB/s
00:10:20.666    05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:20.666   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
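That was the last of the sixteen attach passes (Malloc0 through AIO0). The recurring `(( i++ ))` / `(( i < 16 ))` lines are the driver loop in bdev/nbd_common.sh@14-17, which pairs each bdev with a /dev/nbdN node, starts it over the RPC socket, and waits for it to come up. A sketch consistent with the trace (the array names are illustrative):

    for (( i = 0; i < ${#bdev_list[@]}; i++ )); do
        rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
        waitfornbd "$(basename "${nbd_list[$i]}")"
    done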
00:10:20.666    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:20.666    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:20.666     05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:20.926    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd0",
00:10:20.926      "bdev_name": "Malloc0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd1",
00:10:20.926      "bdev_name": "Malloc1p0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd10",
00:10:20.926      "bdev_name": "Malloc1p1"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd11",
00:10:20.926      "bdev_name": "Malloc2p0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd12",
00:10:20.926      "bdev_name": "Malloc2p1"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd13",
00:10:20.926      "bdev_name": "Malloc2p2"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd14",
00:10:20.926      "bdev_name": "Malloc2p3"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd15",
00:10:20.926      "bdev_name": "Malloc2p4"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd2",
00:10:20.926      "bdev_name": "Malloc2p5"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd3",
00:10:20.926      "bdev_name": "Malloc2p6"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd4",
00:10:20.926      "bdev_name": "Malloc2p7"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd5",
00:10:20.926      "bdev_name": "TestPT"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd6",
00:10:20.926      "bdev_name": "raid0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd7",
00:10:20.926      "bdev_name": "concat0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd8",
00:10:20.926      "bdev_name": "raid1"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd9",
00:10:20.926      "bdev_name": "AIO0"
00:10:20.926    }
00:10:20.926  ]'
00:10:20.926     05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:20.926     05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd0",
00:10:20.926      "bdev_name": "Malloc0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd1",
00:10:20.926      "bdev_name": "Malloc1p0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd10",
00:10:20.926      "bdev_name": "Malloc1p1"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd11",
00:10:20.926      "bdev_name": "Malloc2p0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd12",
00:10:20.926      "bdev_name": "Malloc2p1"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd13",
00:10:20.926      "bdev_name": "Malloc2p2"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd14",
00:10:20.926      "bdev_name": "Malloc2p3"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd15",
00:10:20.926      "bdev_name": "Malloc2p4"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd2",
00:10:20.926      "bdev_name": "Malloc2p5"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd3",
00:10:20.926      "bdev_name": "Malloc2p6"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd4",
00:10:20.926      "bdev_name": "Malloc2p7"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd5",
00:10:20.926      "bdev_name": "TestPT"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd6",
00:10:20.926      "bdev_name": "raid0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd7",
00:10:20.926      "bdev_name": "concat0"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd8",
00:10:20.926      "bdev_name": "raid1"
00:10:20.926    },
00:10:20.926    {
00:10:20.926      "nbd_device": "/dev/nbd9",
00:10:20.926      "bdev_name": "AIO0"
00:10:20.926    }
00:10:20.926  ]'
00:10:20.926    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:20.926  /dev/nbd1
00:10:20.926  /dev/nbd10
00:10:20.926  /dev/nbd11
00:10:20.926  /dev/nbd12
00:10:20.927  /dev/nbd13
00:10:20.927  /dev/nbd14
00:10:20.927  /dev/nbd15
00:10:20.927  /dev/nbd2
00:10:20.927  /dev/nbd3
00:10:20.927  /dev/nbd4
00:10:20.927  /dev/nbd5
00:10:20.927  /dev/nbd6
00:10:20.927  /dev/nbd7
00:10:20.927  /dev/nbd8
00:10:20.927  /dev/nbd9'
00:10:20.927     05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:20.927  /dev/nbd1
00:10:20.927  /dev/nbd10
00:10:20.927  /dev/nbd11
00:10:20.927  /dev/nbd12
00:10:20.927  /dev/nbd13
00:10:20.927  /dev/nbd14
00:10:20.927  /dev/nbd15
00:10:20.927  /dev/nbd2
00:10:20.927  /dev/nbd3
00:10:20.927  /dev/nbd4
00:10:20.927  /dev/nbd5
00:10:20.927  /dev/nbd6
00:10:20.927  /dev/nbd7
00:10:20.927  /dev/nbd8
00:10:20.927  /dev/nbd9'
00:10:20.927     05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:20.927    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16
00:10:20.927    05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']'
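nbd_get_count (nbd_common.sh@61-66) asks the RPC server for its disk table and counts the /dev/nbd entries; the `'[' 16 -ne 16 ']'` guard above would fail the test if fewer than the sixteen expected devices had come up. A sketch of the counting logic as traced; the `|| true` guard is an inference here, suggested by the bare `true` that appears when the same helper runs against an empty list near the end of this section:

    nbd_get_count() {
        local rpc_server=$1 json names count
        json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c prints 0 but exits non-zero on no matches,
        # which would trip set -e without the guard
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }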
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:10:20.927  256+0 records in
00:10:20.927  256+0 records out
00:10:20.927  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583255 s, 180 MB/s
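This begins the write half of nbd_dd_data_verify (nbd_common.sh@70-78): one shared 1 MiB random pattern is generated once, then copied to every attached device with O_DIRECT so the data actually reaches the bdev rather than stopping in the page cache. Roughly:

    tmp_file=$SPDK_DIR/test/bdev/nbdrandtest   # $SPDK_DIR stands in for the repo path
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done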
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:20.927   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:21.187  256+0 records in
00:10:21.187  256+0 records out
00:10:21.187  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167468 s, 6.3 MB/s
00:10:21.187   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:21.187   05:55:41 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:21.187  256+0 records in
00:10:21.187  256+0 records out
00:10:21.187  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163131 s, 6.4 MB/s
00:10:21.187   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:21.187   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:10:21.446  256+0 records in
00:10:21.446  256+0 records out
00:10:21.446  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.169536 s, 6.2 MB/s
00:10:21.446   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:21.446   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:10:21.705  256+0 records in
00:10:21.705  256+0 records out
00:10:21.705  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159382 s, 6.6 MB/s
00:10:21.705   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:21.705   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:10:21.705  256+0 records in
00:10:21.705  256+0 records out
00:10:21.705  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165187 s, 6.3 MB/s
00:10:21.705   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:21.705   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:10:21.964  256+0 records in
00:10:21.964  256+0 records out
00:10:21.964  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.154178 s, 6.8 MB/s
00:10:21.964   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:21.964   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:10:22.222  256+0 records in
00:10:22.222  256+0 records out
00:10:22.222  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.165328 s, 6.3 MB/s
00:10:22.222   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.222   05:55:42 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct
00:10:22.222  256+0 records in
00:10:22.222  256+0 records out
00:10:22.222  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1711 s, 6.1 MB/s
00:10:22.222   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.222   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct
00:10:22.481  256+0 records in
00:10:22.481  256+0 records out
00:10:22.481  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167866 s, 6.2 MB/s
00:10:22.481   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.481   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct
00:10:22.740  256+0 records in
00:10:22.740  256+0 records out
00:10:22.740  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159733 s, 6.6 MB/s
00:10:22.740   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.740   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct
00:10:22.740  256+0 records in
00:10:22.740  256+0 records out
00:10:22.740  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.16426 s, 6.4 MB/s
00:10:22.740   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.740   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct
00:10:22.999  256+0 records in
00:10:22.999  256+0 records out
00:10:22.999  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168297 s, 6.2 MB/s
00:10:22.999   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:22.999   05:55:43 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct
00:10:23.258  256+0 records in
00:10:23.258  256+0 records out
00:10:23.258  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167278 s, 6.3 MB/s
00:10:23.258   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:23.258   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct
00:10:23.258  256+0 records in
00:10:23.258  256+0 records out
00:10:23.258  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168513 s, 6.2 MB/s
00:10:23.258   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:23.258   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct
00:10:23.517  256+0 records in
00:10:23.517  256+0 records out
00:10:23.517  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.182052 s, 5.8 MB/s
00:10:23.517   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:23.517   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct
00:10:23.777  256+0 records in
00:10:23.777  256+0 records out
00:10:23.777  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259695 s, 4.0 MB/s
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:23.777   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8
00:10:24.036   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:24.036   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9
00:10:24.036   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
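...and that was the verify half (@80-85): each device is byte-compared against the same pattern file, so any device that dropped or corrupted writes fails the run, and the pattern file is then removed. cmp exits non-zero on the first mismatch, which aborts the test under set -e. Roughly:

    for dev in "${nbd_list[@]}"; do
        # -b prints differing bytes; -n 1M limits the compare to the written range
        cmp -b -n 1M "$tmp_file" "$dev"
    done
    rm "$tmp_file"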
00:10:24.036   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:10:24.037   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:24.037   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:24.037   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:24.037   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:10:24.037   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:24.037   05:55:44 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:24.296    05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
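The teardown mirrors the attach loop: nbd_stop_disk over the RPC socket, then waitfornbd_exit polls /proc/partitions until the kernel has dropped the node. xtrace only shows the commands that actually ran, so the branch shape below is inferred, and the sleep is again an assumption:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done as soon as the device is no longer listed
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1                            # assumed poll interval
        done
        return 0
    }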
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:24.296   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:24.555    05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:24.555   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:10:24.815    05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:24.815   05:55:45 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:10:25.074    05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:25.074   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:10:25.333    05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:25.333   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:10:25.591    05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:25.591   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:10:25.850    05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:10:25.850   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:10:25.850   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:10:25.850   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:25.850   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:25.850   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:10:26.108   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:26.108   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:26.108   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:26.109   05:55:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:10:26.109    05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:26.109   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:10:26.367    05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:26.367   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:10:26.625    05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:26.625   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:10:26.885    05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:26.885   05:55:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:10:27.144    05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:10:27.144   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:10:27.144   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:10:27.144   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:27.144   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:27.144   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:10:27.404    05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:27.404   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:10:27.972    05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:10:27.972    05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:27.972   05:55:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:10:28.231    05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:10:28.232   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:10:28.232   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:10:28.232   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:28.232   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:28.232   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:10:28.232   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:28.232   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
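
The repeated @35-@45 trace above is waitfornbd_exit: after each nbd_stop_disk RPC, the helper polls /proc/partitions until the nbdN entry disappears, giving up after 20 attempts. A minimal sketch of that loop, paraphrased from the traced line numbers in test/bdev/nbd_common.sh (the 0.1 s poll interval is an assumption):

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1    # device still registered; poll again (interval assumed)
            else
                break        # the @41 break in the trace: the entry is gone
            fi
        done
        (( i <= 20 ))        # the @45 "return 0" on success; non-zero if we timed out
    }
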
00:10:28.232    05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:28.232    05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:28.232     05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:28.490    05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:28.490     05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:28.490     05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:28.490    05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:28.490     05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:10:28.490     05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:28.490     05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:10:28.490    05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:10:28.490    05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:10:28.490   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:10:28.490   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:28.490   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
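
nbd_get_count (@61-@66) then confirms every export is really gone: nbd_get_disks returns '[]', jq pulls the .nbd_device field of each entry, and grep -c counts /dev/nbd matches, with the trailing "true" at @65 keeping set -e happy when nothing matches. Roughly, assuming the helper shape implied by the trace:

    nbd_get_count() {
        local rpc_server=$1 json names count
        json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits 1 on zero matches, hence "|| true" under set -e
        count=$(echo "$names" | grep -c /dev/nbd || true)
        echo "$count"
    }

The caller at @105 asserts the count equals the expected value, here 0.
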
00:10:28.490   05:55:49 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:10:28.490   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:28.490   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:10:28.490   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:10:28.763  malloc_lvol_verify
00:10:28.763   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:10:29.038  43e6d981-ff41-49fd-ad57-954c9783900c
00:10:29.038   05:55:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:10:29.297  5d6dab8d-b6c5-426d-958d-70209565518e
00:10:29.297   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:10:29.556  /dev/nbd0
00:10:29.556   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:10:29.556   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:10:29.556   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:10:29.556   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:10:29.556   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:10:29.556  mke2fs 1.47.0 (5-Feb-2023)
00:10:29.556  Discarding device blocks: done
00:10:29.556  Creating filesystem with 1024 4k blocks and 1024 inodes
00:10:29.556  Filesystem too small for a journal
00:10:29.556  
00:10:29.557  Allocating group tables: done
00:10:29.557  Writing inode tables: done
00:10:29.557  Writing superblocks and filesystem accounting information: done
00:10:29.557  
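
nbd_with_lvol_verify (@131-@142) proves end-to-end I/O through the lvol stack: a 16 MiB malloc bdev with 512-byte blocks backs an lvstore, a 4 MiB lvol carved from it is exported as /dev/nbd0, the capacity is awaited via sysfs (4 MiB = 8192 512-byte sectors, the value checked at @150), and mkfs.ext4 writes through the whole chain. A condensed replay, with the RPC invocations copied from the trace and only the polling interval assumed:

    sock=/var/tmp/spdk-nbd.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s "$sock" bdev_malloc_create -b malloc_lvol_verify 16 512
    "$rpc" -s "$sock" bdev_lvol_create_lvstore malloc_lvol_verify lvs
    "$rpc" -s "$sock" bdev_lvol_create lvol 4 -l lvs
    "$rpc" -s "$sock" nbd_start_disk lvs/lvol /dev/nbd0
    # Capacity is set asynchronously; wait for sysfs to report a non-zero size
    while [[ $(cat /sys/block/nbd0/size) -eq 0 ]]; do sleep 0.1; done
    mkfs.ext4 /dev/nbd0
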
00:10:29.557   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:10:29.557   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:29.557   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:10:29.557   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:29.557   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:10:29.557   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:29.557   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:29.816    05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 80706
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 80706 ']'
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 80706
00:10:29.816    05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:29.816    05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80706
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:29.816  killing process with pid 80706
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80706'
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@973 -- # kill 80706
00:10:29.816   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@978 -- # wait 80706
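
killprocess (@954-@978) is the guarded teardown: it requires a non-empty pid, uses kill -0 as a liveness probe, and on Linux inspects the process comm so it never signals a sudo wrapper directly (here the comm is reactor_0, the SPDK app's main thread). A rough reconstruction from the traced line numbers; the real helper in common/autotest_common.sh has more branches:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1            # the @954 '-z' guard
        kill -0 "$pid" || return 0           # @958: nothing to do if already gone
        if [[ $(uname) == Linux ]]; then
            local comm
            comm=$(ps --no-headers -o comm= "$pid")
            [[ $comm == sudo ]] && return 1  # @964: don't kill a sudo wrapper outright
        fi
        echo "killing process with pid $pid"
        kill "$pid"                          # @973
        wait "$pid"                          # @978: reap it and observe the exit code
    }
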
00:10:30.075   05:55:50 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:10:30.075  
00:10:30.075  real	0m24.287s
00:10:30.075  user	0m34.735s
00:10:30.075  sys	0m9.212s
00:10:30.075   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:30.075   05:55:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:10:30.075  ************************************
00:10:30.075  END TEST bdev_nbd
00:10:30.075  ************************************
00:10:30.075   05:55:50 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:10:30.075   05:55:50 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']'
00:10:30.075   05:55:50 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']'
00:10:30.075   05:55:50 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:10:30.075   05:55:50 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:30.075   05:55:50 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.075   05:55:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:30.075  ************************************
00:10:30.075  START TEST bdev_fio
00:10:30.075  ************************************
00:10:30.075   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:10:30.075   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:10:30.075   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:10:30.075  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:10:30.075   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:10:30.075    05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:10:30.075    05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:10:30.075   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:10:30.075   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:10:30.076    05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
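
fio_config_gen (@1284-@1329) builds bdev.fio in place: it refuses to touch an existing file (@1290), cats in a base section for the verify workload (@1305, @1318), and, because bdev_type is AIO, appends serialize_overlap=1 once fio --version matches the fio-3 glob at @1328. The version gate in isolation, with the paths taken from this run:

    config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
    if [[ $(/usr/src/fio/fio --version) == *fio-3* ]]; then
        echo serialize_overlap=1 >> "$config_file"
    fi
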
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0
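
The @340-@342 loop appends one job section per bdev name, so the tail of the generated bdev.fio reads (reconstructed from the echoed lines above):

    [job_Malloc0]
    filename=Malloc0
    [job_Malloc1p0]
    filename=Malloc1p0
    ...
    [job_raid1]
    filename=raid1
    [job_AIO0]
    filename=AIO0
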
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:30.076   05:55:50 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:10:30.076  ************************************
00:10:30.076  START TEST bdev_fio_rw_verify
00:10:30.076  ************************************
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:10:30.076   05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:10:30.076    05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:10:30.076    05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:10:30.076    05:55:50 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:10:30.076   05:55:51 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:10:30.076   05:55:51 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:10:30.076   05:55:51 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:10:30.076   05:55:51 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:10:30.076   05:55:51 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
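
The @1348-@1356 preamble handles sanitized builds: ldd resolves which ASAN runtime the fio plugin links against (grep libasan, third ldd column via awk), and that library is placed first in LD_PRELOAD, ahead of the plugin itself, so the uninstrumented /usr/src/fio/fio binary can load the instrumented spdk_bdev engine. Stripped to its core (the full command above carries additional flags):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
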
00:10:30.335  job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.335  job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.335  job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.335  job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.335  job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.335  job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:30.336  fio-3.35
00:10:30.336  Starting 16 threads
00:10:42.537  
00:10:42.537  job_Malloc0: (groupid=0, jobs=16): err= 0: pid=81827: Mon Nov 18 05:56:01 2024
00:10:42.537    read: IOPS=80.6k, BW=315MiB/s (330MB/s)(3151MiB/10001msec)
00:10:42.537      slat (usec): min=2, max=10037, avg=35.10, stdev=228.84
00:10:42.537      clat (usec): min=12, max=13434, avg=280.42, stdev=663.29
00:10:42.537       lat (usec): min=30, max=13453, avg=315.52, stdev=699.72
00:10:42.537      clat percentiles (usec):
00:10:42.537       | 50.000th=[  172], 99.000th=[ 4228], 99.900th=[ 7177], 99.990th=[ 8225],
00:10:42.537       | 99.999th=[11207]
00:10:42.537    write: IOPS=127k, BW=497MiB/s (521MB/s)(4909MiB/9883msec); 0 zone resets
00:10:42.538      slat (usec): min=5, max=22513, avg=60.72, stdev=313.71
00:10:42.538      clat (usec): min=9, max=22929, avg=364.93, stdev=769.87
00:10:42.538       lat (usec): min=45, max=22984, avg=425.65, stdev=827.69
00:10:42.538      clat percentiles (usec):
00:10:42.538       | 50.000th=[  221], 99.000th=[ 4293], 99.900th=[ 7308], 99.990th=[ 9372],
00:10:42.538       | 99.999th=[14746]
00:10:42.538     bw (  KiB/s): min=360033, max=774872, per=98.76%, avg=502340.84, stdev=7423.31, samples=304
00:10:42.538     iops        : min=90008, max=193718, avg=125584.89, stdev=1855.83, samples=304
00:10:42.538    lat (usec)   : 10=0.01%, 20=0.01%, 50=0.25%, 100=12.45%, 250=56.62%
00:10:42.538    lat (usec)   : 500=26.18%, 750=1.12%, 1000=0.10%
00:10:42.538    lat (msec)   : 2=0.09%, 4=1.10%, 10=2.07%, 20=0.01%, 50=0.01%
00:10:42.538    cpu          : usr=57.92%, sys=2.61%, ctx=240577, majf=0, minf=114753
00:10:42.538    IO depths    : 1=11.1%, 2=23.8%, 4=52.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:42.538       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:42.538       complete  : 0=0.0%, 4=88.8%, 8=11.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:42.538       issued rwts: total=806541,1256686,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:42.538       latency   : target=0, window=0, percentile=100.00%, depth=8
00:10:42.538  
00:10:42.538  Run status group 0 (all jobs):
00:10:42.538     READ: bw=315MiB/s (330MB/s), 315MiB/s-315MiB/s (330MB/s-330MB/s), io=3151MiB (3304MB), run=10001-10001msec
00:10:42.538    WRITE: bw=497MiB/s (521MB/s), 497MiB/s-497MiB/s (521MB/s-521MB/s), io=4909MiB (5147MB), run=9883-9883msec
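
As a sanity check on the summary: the jobs run at bs=4k, so 80.6k read IOPS × 4096 B ≈ 330 MB/s and 127k write IOPS × 4096 B ≈ 520 MB/s, consistent with the reported 330 MB/s (315 MiB/s) read and 521 MB/s (497 MiB/s) write bandwidth.
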
00:10:42.538  -----------------------------------------------------
00:10:42.538  Suppressions used:
00:10:42.538    count      bytes template
00:10:42.538       16        140 /usr/src/fio/parse.c
00:10:42.538    11246    1079616 /usr/src/fio/iolog.c
00:10:42.538        1        904 libcrypto.so
00:10:42.538  -----------------------------------------------------
00:10:42.538  
00:10:42.538  
00:10:42.538  real	0m11.640s
00:10:42.538  user	1m34.843s
00:10:42.538  sys	0m5.263s
00:10:42.538   05:56:02 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:42.538  ************************************
00:10:42.538  END TEST bdev_fio_rw_verify
00:10:42.538  ************************************
00:10:42.538   05:56:02 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:10:42.538   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:10:42.538    05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:10:42.539    05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "07969e35-2219-41cd-943b-1d22252bca2a"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "07969e35-2219-41cd-943b-1d22252bca2a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "881d8c1c-0727-5286-aaf5-163dd6273385"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "881d8c1c-0727-5286-aaf5-163dd6273385",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "12f4d107-489e-596c-be1c-fcbc59206a2b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "12f4d107-489e-596c-be1c-fcbc59206a2b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "bff30274-289d-5c2e-b86e-9be3f91da5ac"' '  ],' '  
"product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "bff30274-289d-5c2e-b86e-9be3f91da5ac",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "ec83597c-2a54-5449-830c-1db9d62af1b0"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ec83597c-2a54-5449-830c-1db9d62af1b0",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "c9af9a33-c842-5108-9280-18c8d2f0949c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "c9af9a33-c842-5108-9280-18c8d2f0949c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "238b83fd-a6e3-59ef-8d98-b07387f117e7"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "238b83fd-a6e3-59ef-8d98-b07387f117e7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": 
false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "f4c19827-5a01-5ea2-b82f-614e10573e70"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "f4c19827-5a01-5ea2-b82f-614e10573e70",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "758080c5-e734-55c6-82c2-625a5e6ba79c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "758080c5-e734-55c6-82c2-625a5e6ba79c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "fc51651a-17c4-5e54-ab2b-ed7cfe2f9eb5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "fc51651a-17c4-5e54-ab2b-ed7cfe2f9eb5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    
"zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "7e39d8bb-12df-5901-afc2-cedd03fdb64b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "7e39d8bb-12df-5901-afc2-cedd03fdb64b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "351029bb-918c-5b5a-914d-02a6f62695fb"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "351029bb-918c-5b5a-914d-02a6f62695fb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "d84166dd-d7b2-42ba-9cad-e47406906e67"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "d84166dd-d7b2-42ba-9cad-e47406906e67",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": 
false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "d84166dd-d7b2-42ba-9cad-e47406906e67",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "0195adf0-3e38-4d5c-9837-7294d29d665b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "7dd9c048-1c30-44df-bde3-975ecaa7e783",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "af729de1-6d51-4b59-9ce0-16f4d36729d7"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "af729de1-6d51-4b59-9ce0-16f4d36729d7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "af729de1-6d51-4b59-9ce0-16f4d36729d7",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "7065f660-48a2-4c51-8b1b-a491cd0dfe1b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "3367c1c5-a34a-4b52-bc0d-c826f0675e10",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    
"d43687e9-57de-41b6-9724-433d0adcea06"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "d43687e9-57de-41b6-9724-433d0adcea06",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "d43687e9-57de-41b6-9724-433d0adcea06",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "b71b1a3d-8efa-40a5-ab4c-c50f0a9ee06f",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "8dcd41cc-1bce-41af-a5f9-80f263f4b861",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "965c45d7-1a1c-4f3b-b997-aa6404f5a658"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "965c45d7-1a1c-4f3b-b997-aa6404f5a658",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:10:42.539   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0
00:10:42.539  Malloc1p0
00:10:42.539  Malloc1p1
00:10:42.539  Malloc2p0
00:10:42.539  Malloc2p1
00:10:42.539  Malloc2p2
00:10:42.539  Malloc2p3
00:10:42.539  Malloc2p4
00:10:42.539  Malloc2p5
00:10:42.539  Malloc2p6
00:10:42.539  Malloc2p7
00:10:42.539  TestPT
00:10:42.539  raid0
00:10:42.539  concat0 ]]
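
The @354/@355 jq filter, select(.supported_io_types.unmap == true) | .name, keeps only unmap-capable bdevs for the trim workload, which is why raid1 and AIO0 (both reporting "unmap": false in the JSON dump above) are missing from the list that closes at 'concat0 ]]'. An equivalent standalone query, assuming the same bdev list is fetched live over the RPC socket used throughout this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_get_bdevs \
        | jq -r '.[] | select(.supported_io_types.unmap == true) | .name'
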
00:10:42.539    05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:10:42.541    05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "07969e35-2219-41cd-943b-1d22252bca2a"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "07969e35-2219-41cd-943b-1d22252bca2a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "881d8c1c-0727-5286-aaf5-163dd6273385"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "881d8c1c-0727-5286-aaf5-163dd6273385",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "12f4d107-489e-596c-be1c-fcbc59206a2b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "12f4d107-489e-596c-be1c-fcbc59206a2b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "bff30274-289d-5c2e-b86e-9be3f91da5ac"' '  ],' '  
"product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "bff30274-289d-5c2e-b86e-9be3f91da5ac",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "ec83597c-2a54-5449-830c-1db9d62af1b0"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ec83597c-2a54-5449-830c-1db9d62af1b0",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "c9af9a33-c842-5108-9280-18c8d2f0949c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "c9af9a33-c842-5108-9280-18c8d2f0949c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "238b83fd-a6e3-59ef-8d98-b07387f117e7"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "238b83fd-a6e3-59ef-8d98-b07387f117e7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": 
false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "f4c19827-5a01-5ea2-b82f-614e10573e70"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "f4c19827-5a01-5ea2-b82f-614e10573e70",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "758080c5-e734-55c6-82c2-625a5e6ba79c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "758080c5-e734-55c6-82c2-625a5e6ba79c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    "fc51651a-17c4-5e54-ab2b-ed7cfe2f9eb5"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "fc51651a-17c4-5e54-ab2b-ed7cfe2f9eb5",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    
"zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "7e39d8bb-12df-5901-afc2-cedd03fdb64b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "7e39d8bb-12df-5901-afc2-cedd03fdb64b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "351029bb-918c-5b5a-914d-02a6f62695fb"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "351029bb-918c-5b5a-914d-02a6f62695fb",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "d84166dd-d7b2-42ba-9cad-e47406906e67"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "d84166dd-d7b2-42ba-9cad-e47406906e67",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": 
false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "d84166dd-d7b2-42ba-9cad-e47406906e67",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "0195adf0-3e38-4d5c-9837-7294d29d665b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "7dd9c048-1c30-44df-bde3-975ecaa7e783",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "af729de1-6d51-4b59-9ce0-16f4d36729d7"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "af729de1-6d51-4b59-9ce0-16f4d36729d7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "af729de1-6d51-4b59-9ce0-16f4d36729d7",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "7065f660-48a2-4c51-8b1b-a491cd0dfe1b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "3367c1c5-a34a-4b52-bc0d-c826f0675e10",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    
"d43687e9-57de-41b6-9724-433d0adcea06"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "d43687e9-57de-41b6-9724-433d0adcea06",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "d43687e9-57de-41b6-9724-433d0adcea06",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "b71b1a3d-8efa-40a5-ab4c-c50f0a9ee06f",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "8dcd41cc-1bce-41af-a5f9-80f263f4b861",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "965c45d7-1a1c-4f3b-b997-aa6404f5a658"' '  ],' '  "product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "965c45d7-1a1c-4f3b-b997-aa6404f5a658",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0
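The loop traced above assembles the fio job file: every bdev whose supported_io_types.unmap is true gets a [job_<name>] section with filename=<name>. That filter is why raid1 and AIO0, which both report "unmap": false in the JSON dump, are absent from the 14 jobs fio starts below. A self-contained sketch of the same generation step, assuming bdevs holds the JSON objects shown earlier:

    # emit one fio job per unmap-capable bdev (mirrors bdev/blockdev.sh lines 355-357)
    for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"        # fio job section header
        echo "filename=$b"     # the spdk_bdev ioengine resolves this to the bdev name
    done >> bdev.fio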
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:42.541   05:56:02 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:10:42.541  ************************************
00:10:42.541  START TEST bdev_fio_trim
00:10:42.541  ************************************
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local sanitizers
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # shift
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # local asan_lib=
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:10:42.541    05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:10:42.541    05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:10:42.541    05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # grep libasan
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # break
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:10:42.541   05:56:02 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
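Before launching fio, the harness checks whether the spdk_bdev fio plugin was built against a sanitizer: it scans the plugin's ldd output for libasan, takes the resolved library path, and prepends it to LD_PRELOAD ahead of the plugin itself (ASan generally requires its runtime to be the first DSO loaded, which preloading the plugin alone would violate). A condensed sketch of the traced logic, with the paths taken from this run:

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # resolve the ASan runtime the plugin links against, if any
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # preload the sanitizer first, then the ioengine plugin
    [[ -n "$asan_lib" ]] && export LD_PRELOAD="$asan_lib $plugin"
    /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 bdev.fio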
00:10:42.541  job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:42.541  fio-3.35
00:10:42.541  Starting 14 threads
00:10:54.776  
00:10:54.776  job_Malloc0: (groupid=0, jobs=14): err= 0: pid=82004: Mon Nov 18 05:56:13 2024
00:10:54.776    write: IOPS=161k, BW=627MiB/s (658MB/s)(6275MiB/10004msec); 0 zone resets
00:10:54.776      slat (usec): min=3, max=8039, avg=31.58, stdev=191.74
00:10:54.776      clat (usec): min=18, max=10410, avg=226.29, stdev=541.84
00:10:54.776       lat (usec): min=32, max=10448, avg=257.87, stdev=573.42
00:10:54.776      clat percentiles (usec):
00:10:54.776       | 50.000th=[  149], 99.000th=[ 4178], 99.900th=[ 5997], 99.990th=[ 7242],
00:10:54.776       | 99.999th=[ 8225]
00:10:54.776     bw (  KiB/s): min=467996, max=847416, per=100.00%, avg=642723.47, stdev=9425.24, samples=266
00:10:54.776     iops        : min=116999, max=211853, avg=160680.63, stdev=2356.31, samples=266
00:10:54.776    trim: IOPS=161k, BW=627MiB/s (658MB/s)(6275MiB/10004msec); 0 zone resets
00:10:54.776      slat (usec): min=4, max=10044, avg=20.33, stdev=153.80
00:10:54.776      clat (usec): min=5, max=10449, avg=235.80, stdev=517.71
00:10:54.777       lat (usec): min=15, max=10482, avg=256.13, stdev=539.53
00:10:54.777      clat percentiles (usec):
00:10:54.777       | 50.000th=[  167], 99.000th=[ 4146], 99.900th=[ 6128], 99.990th=[ 7242],
00:10:54.777       | 99.999th=[ 8029]
00:10:54.777     bw (  KiB/s): min=468044, max=847416, per=100.00%, avg=642723.89, stdev=9425.89, samples=266
00:10:54.777     iops        : min=117011, max=211853, avg=160680.74, stdev=2356.47, samples=266
00:10:54.777    lat (usec)   : 10=0.04%, 20=0.14%, 50=0.61%, 100=12.05%, 250=80.71%
00:10:54.777    lat (usec)   : 500=4.60%, 750=0.06%, 1000=0.01%
00:10:54.777    lat (msec)   : 2=0.02%, 4=0.44%, 10=1.30%, 20=0.01%
00:10:54.777    cpu          : usr=68.82%, sys=0.87%, ctx=150652, majf=0, minf=23771
00:10:54.777    IO depths    : 1=12.4%, 2=24.7%, 4=50.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:54.777       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:54.777       complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:54.777       issued rwts: total=0,1606312,1606315,0 short=0,0,0,0 dropped=0,0,0,0
00:10:54.777       latency   : target=0, window=0, percentile=100.00%, depth=8
00:10:54.777  
00:10:54.777  Run status group 0 (all jobs):
00:10:54.777    WRITE: bw=627MiB/s (658MB/s), 627MiB/s-627MiB/s (658MB/s-658MB/s), io=6275MiB (6579MB), run=10004-10004msec
00:10:54.777     TRIM: bw=627MiB/s (658MB/s), 627MiB/s-627MiB/s (658MB/s-658MB/s), io=6275MiB (6579MB), run=10004-10004msec
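As a quick cross-check, the summary is internally consistent: the issued rwts line reports 1,606,315 trims in 10.004 s, roughly 160.6k IOPS, and 160.6k IOPS at 4 KiB per IO is about 658 MB/s (627 MiB/s), matching both the WRITE/TRIM bandwidth lines and the avg=642723 KiB/s sample mean within sampling error.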
00:10:54.777  -----------------------------------------------------
00:10:54.777  Suppressions used:
00:10:54.777    count      bytes template
00:10:54.777       14        129 /usr/src/fio/parse.c
00:10:54.777        1        904 libcrypto.so
00:10:54.777  -----------------------------------------------------
00:10:54.777  
00:10:54.777  
00:10:54.777  real	0m11.332s
00:10:54.777  user	1m38.444s
00:10:54.777  sys	0m2.130s
00:10:54.777   05:56:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:54.777   05:56:14 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:10:54.777  ************************************
00:10:54.777  END TEST bdev_fio_trim
00:10:54.777  ************************************
00:10:54.777   05:56:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:10:54.777   05:56:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:54.777  /home/vagrant/spdk_repo/spdk
00:10:54.777   05:56:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:10:54.777   05:56:14 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:10:54.777  
00:10:54.777  real	0m23.231s
00:10:54.777  user	3m13.386s
00:10:54.777  sys	0m7.525s
00:10:54.777   05:56:14 blockdev_general.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:54.777   05:56:14 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:10:54.777  ************************************
00:10:54.777  END TEST bdev_fio
00:10:54.777  ************************************
00:10:54.777   05:56:14 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:54.777   05:56:14 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:54.777   05:56:14 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:54.777   05:56:14 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:54.777   05:56:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:54.777  ************************************
00:10:54.777  START TEST bdev_verify
00:10:54.777  ************************************
00:10:54.777   05:56:14 blockdev_general.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:10:54.777  [2024-11-18 05:56:14.225295] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:10:54.777  [2024-11-18 05:56:14.225486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82164 ]
00:10:54.777  [2024-11-18 05:56:14.376017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:54.777  [2024-11-18 05:56:14.397275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:54.777  [2024-11-18 05:56:14.397386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:54.777  [2024-11-18 05:56:14.504750] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:54.777  [2024-11-18 05:56:14.504850] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:54.777  [2024-11-18 05:56:14.512696] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:54.777  [2024-11-18 05:56:14.512745] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:54.777  [2024-11-18 05:56:14.520755] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:54.777  [2024-11-18 05:56:14.520819] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:54.777  [2024-11-18 05:56:14.520834] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:54.777  [2024-11-18 05:56:14.599480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:54.777  [2024-11-18 05:56:14.599580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:54.777  [2024-11-18 05:56:14.599610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80
00:10:54.777  [2024-11-18 05:56:14.599641] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:54.777  [2024-11-18 05:56:14.602501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:54.777  [2024-11-18 05:56:14.602552] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
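This sequence shows the passthru vbdev's deferred-construction path: when TestPT is first requested, its base bdev Malloc3 does not exist yet, so creation is parked ("vbdev creation deferred pending base bdev arrival"); once Malloc3 registers, the examine callback matches it, opens and claims the base bdev, and only then registers TestPT. A hedged sketch of creating the same vbdev by hand, assuming the in-tree rpc.py flags for the bdev_passthru_create RPC:

    # park a passthru on top of Malloc3; it completes as soon as Malloc3 exists
    ./scripts/rpc.py bdev_passthru_create -b Malloc3 -p TestPT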
00:10:54.777  Running I/O for 5 seconds...
00:10:58.584      37655.00 IOPS,   147.09 MiB/s
[2024-11-18T05:56:20.129Z]     43904.00 IOPS,   171.50 MiB/s
[2024-11-18T05:56:20.129Z]     41940.67 IOPS,   163.83 MiB/s
00:10:59.151                                                                                                  Latency(us)
00:10:59.151  
[2024-11-18T05:56:20.129Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:10:59.151  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x0 length 0x1000
00:10:59.151  	 Malloc0             :       5.14    1394.53       5.45       0.00     0.00   91651.88     647.91  282162.27
00:10:59.151  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x1000 length 0x1000
00:10:59.151  	 Malloc0             :       5.14    1368.33       5.35       0.00     0.00   93400.02     573.44  305040.29
00:10:59.151  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x0 length 0x800
00:10:59.151  	 Malloc1p0           :       5.14     721.87       2.82       0.00     0.00  176735.14    2829.96  149660.39
00:10:59.151  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x800 length 0x800
00:10:59.151  	 Malloc1p0           :       5.19     714.85       2.79       0.00     0.00  178417.10    2844.86  166818.91
00:10:59.151  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x0 length 0x800
00:10:59.151  	 Malloc1p1           :       5.14     721.52       2.82       0.00     0.00  176465.66    2606.55  146800.64
00:10:59.151  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x800 length 0x800
00:10:59.151  	 Malloc1p1           :       5.19     714.55       2.79       0.00     0.00  178102.86    2636.33  164912.41
00:10:59.151  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x0 length 0x200
00:10:59.151  	 Malloc2p0           :       5.15     721.10       2.82       0.00     0.00  176226.51    2666.12  142987.64
00:10:59.151  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.151  	 Verification LBA range: start 0x200 length 0x200
00:10:59.152  	 Malloc2p0           :       5.20     714.21       2.79       0.00     0.00  177809.84    2472.49  162052.65
00:10:59.152  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x0 length 0x200
00:10:59.152  	 Malloc2p1           :       5.15     720.81       2.82       0.00     0.00  175959.17    2278.87  143940.89
00:10:59.152  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x200 length 0x200
00:10:59.152  	 Malloc2p1           :       5.20     713.75       2.79       0.00     0.00  177582.59    2398.02  160146.15
00:10:59.152  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x0 length 0x200
00:10:59.152  	 Malloc2p2           :       5.15     720.51       2.81       0.00     0.00  175726.37    2323.55  142987.64
00:10:59.152  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x200 length 0x200
00:10:59.152  	 Malloc2p2           :       5.20     713.45       2.79       0.00     0.00  177326.90    2457.60  158239.65
00:10:59.152  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x0 length 0x200
00:10:59.152  	 Malloc2p3           :       5.15     720.23       2.81       0.00     0.00  175498.39    2412.92  141081.13
00:10:59.152  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x200 length 0x200
00:10:59.152  	 Malloc2p3           :       5.20     713.18       2.79       0.00     0.00  177066.83    2532.07  157286.40
00:10:59.152  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x0 length 0x200
00:10:59.152  	 Malloc2p4           :       5.16     719.94       2.81       0.00     0.00  175252.50    2651.23  142034.39
00:10:59.152  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.152  	 Verification LBA range: start 0x200 length 0x200
00:10:59.152  	 Malloc2p4           :       5.21     712.90       2.78       0.00     0.00  176787.76    2755.49  154426.65
00:10:59.152  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x200
00:10:59.411  	 Malloc2p5           :       5.16     719.66       2.81       0.00     0.00  174979.13    2770.39  139174.63
00:10:59.411  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x200 length 0x200
00:10:59.411  	 Malloc2p5           :       5.21     712.63       2.78       0.00     0.00  176478.78    2934.23  149660.39
00:10:59.411  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x200
00:10:59.411  	 Malloc2p6           :       5.16     719.36       2.81       0.00     0.00  174696.61    2859.75  136314.88
00:10:59.411  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x200 length 0x200
00:10:59.411  	 Malloc2p6           :       5.21     712.36       2.78       0.00     0.00  176163.03    2770.39  147753.89
00:10:59.411  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x200
00:10:59.411  	 Malloc2p7           :       5.16     719.08       2.81       0.00     0.00  174406.99    2785.28  134408.38
00:10:59.411  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x200 length 0x200
00:10:59.411  	 Malloc2p7           :       5.21     712.10       2.78       0.00     0.00  175864.97    2040.55  145847.39
00:10:59.411  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x1000
00:10:59.411  	 TestPT              :       5.19     715.47       2.79       0.00     0.00  174790.38    8281.37  133455.13
00:10:59.411  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x1000 length 0x1000
00:10:59.411  	 TestPT              :       5.23     710.26       2.77       0.00     0.00  175961.88   10426.18  146800.64
00:10:59.411  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x2000
00:10:59.411  	 raid0               :       5.17     718.40       2.81       0.00     0.00  173867.29    2323.55  127735.62
00:10:59.411  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x2000 length 0x2000
00:10:59.411  	 raid0               :       5.22     711.53       2.78       0.00     0.00  175317.01    3172.54  135361.63
00:10:59.411  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x2000
00:10:59.411  	 concat0             :       5.17     718.11       2.81       0.00     0.00  173591.20    3172.54  123922.62
00:10:59.411  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x2000 length 0x2000
00:10:59.411  	 concat0             :       5.22     711.25       2.78       0.00     0.00  174982.35    3053.38  134408.38
00:10:59.411  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x1000
00:10:59.411  	 raid1               :       5.20     738.87       2.89       0.00     0.00  168361.40    3485.32  120109.61
00:10:59.411  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x1000 length 0x1000
00:10:59.411  	 raid1               :       5.22     710.92       2.78       0.00     0.00  174670.00    4468.36  128688.87
00:10:59.411  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x0 length 0x4e2
00:10:59.411  	 AIO0                :       5.20     738.08       2.88       0.00     0.00  168110.50    2651.23  129642.12
00:10:59.411  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:10:59.411  	 Verification LBA range: start 0x4e2 length 0x4e2
00:10:59.411  	 AIO0                :       5.23     710.14       2.77       0.00     0.00  174151.01    4468.36  131548.63
00:10:59.411  
[2024-11-18T05:56:20.389Z]  ===================================================================================================================
00:10:59.411  
[2024-11-18T05:56:20.389Z]  Total                       :              24283.95      94.86       0.00     0.00  166007.22     573.44  305040.29
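The totals line agrees with itself: 24,283.95 IOPS at 4 KiB per verify IO is 24,283.95 x 4 / 1024 = 94.86 MiB/s, exactly the MiB/s column reported.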
00:10:59.670  
00:10:59.670  real	0m6.242s
00:10:59.670  user	0m11.350s
00:10:59.670  sys	0m0.602s
00:10:59.670   05:56:20 blockdev_general.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:59.670  ************************************
00:10:59.670  END TEST bdev_verify
00:10:59.670  ************************************
00:10:59.670   05:56:20 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:10:59.670   05:56:20 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:59.670   05:56:20 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:10:59.670   05:56:20 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:59.671   05:56:20 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:59.671  ************************************
00:10:59.671  START TEST bdev_verify_big_io
00:10:59.671  ************************************
00:10:59.671   05:56:20 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:10:59.671  [2024-11-18 05:56:20.517993] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:10:59.671  [2024-11-18 05:56:20.518212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82251 ]
00:10:59.930  [2024-11-18 05:56:20.670350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:59.930  [2024-11-18 05:56:20.692808] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:59.930  [2024-11-18 05:56:20.692888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:59.930  [2024-11-18 05:56:20.809205] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:59.930  [2024-11-18 05:56:20.809290] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:59.930  [2024-11-18 05:56:20.817147] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:59.930  [2024-11-18 05:56:20.817209] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:59.930  [2024-11-18 05:56:20.825184] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:59.930  [2024-11-18 05:56:20.825237] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:59.930  [2024-11-18 05:56:20.825253] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:59.930  [2024-11-18 05:56:20.903333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:59.930  [2024-11-18 05:56:20.903417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:59.930  [2024-11-18 05:56:20.903442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x516000008a80
00:10:59.930  [2024-11-18 05:56:20.903456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:59.930  [2024-11-18 05:56:20.906445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:59.930  [2024-11-18 05:56:20.906500] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:00.189  [2024-11-18 05:56:21.041060] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.042278] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.043421] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.044285] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.045371] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.046211] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.047399] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.048329] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.049477] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.050336] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.051478] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.052377] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.053525] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.054777] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.055592] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.056857] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:11:00.189  [2024-11-18 05:56:21.076370] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:11:00.189  [2024-11-18 05:56:21.078191] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
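The queue-depth warnings above are arithmetic-driven: each Malloc2pN split is 8192 blocks x 512 B = 4 MiB, i.e. 64 extents of the 64 KiB IO size, and the imposed cap of 32 is half that; likewise AIO0 is 5000 blocks x 2048 B, about 156 extents, capped at 78. The verify workload thus appears to limit in-flight IOs per job to half the number of IO-sized extents on the bdev, which keeps the requested -q 128 from oversubscribing these small test devices.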
00:11:00.189  Running I/O for 5 seconds...
00:11:06.839       2540.00 IOPS,   158.75 MiB/s
[2024-11-18T05:56:27.817Z]      5098.50 IOPS,   318.66 MiB/s
00:11:06.839                                                                                                  Latency(us)
00:11:06.839  
[2024-11-18T05:56:27.817Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:06.839  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x100
00:11:06.839  	 Malloc0             :       5.66     226.30      14.14       0.00     0.00  555681.23     763.35 1563331.49
00:11:06.839  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x100 length 0x100
00:11:06.839  	 Malloc0             :       5.71     201.68      12.61       0.00     0.00  624269.60     796.86 1830241.75
00:11:06.839  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x80
00:11:06.839  	 Malloc1p0           :       6.34      42.87       2.68       0.00     0.00 2744424.00    1660.74 4606108.39
00:11:06.839  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x80 length 0x80
00:11:06.839  	 Malloc1p0           :       6.01     111.18       6.95       0.00     0.00 1071550.84    2576.76 2181038.08
00:11:06.839  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x80
00:11:06.839  	 Malloc1p1           :       6.35      42.87       2.68       0.00     0.00 2665663.51    1280.93 4453588.25
00:11:06.839  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x80 length 0x80
00:11:06.839  	 Malloc1p1           :       6.44      42.24       2.64       0.00     0.00 2688181.46    1243.69 4545100.33
00:11:06.839  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p0           :       6.00      32.00       2.00       0.00     0.00  905195.78     573.44 1738729.66
00:11:06.839  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p0           :       6.01      31.92       2.00       0.00     0.00  910822.60     547.37 1593835.52
00:11:06.839  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p1           :       6.00      31.99       2.00       0.00     0.00  898595.97     532.48 1715851.64
00:11:06.839  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p1           :       6.02      31.92       1.99       0.00     0.00  904356.28     532.48 1578583.51
00:11:06.839  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p2           :       6.00      31.98       2.00       0.00     0.00  892336.77     707.49 1692973.61
00:11:06.839  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p2           :       6.02      31.91       1.99       0.00     0.00  897877.13     644.19 1555705.48
00:11:06.839  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p3           :       6.01      31.97       2.00       0.00     0.00  885078.14     551.10 1670095.59
00:11:06.839  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p3           :       6.02      31.90       1.99       0.00     0.00  891004.05     539.93 1532827.46
00:11:06.839  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p4           :       6.01      31.96       2.00       0.00     0.00  878184.72     606.95 1647217.57
00:11:06.839  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p4           :       6.02      31.90       1.99       0.00     0.00  884434.84     666.53 1509949.44
00:11:06.839  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p5           :       6.01      31.96       2.00       0.00     0.00  871161.71     543.65 1624339.55
00:11:06.839  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p5           :       6.02      31.89       1.99       0.00     0.00  877577.57     677.70 1487071.42
00:11:06.839  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p6           :       6.01      31.94       2.00       0.00     0.00  864458.93     592.06 1601461.53
00:11:06.839  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p6           :       6.02      31.88       1.99       0.00     0.00  871546.19     606.95 1464193.40
00:11:06.839  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x20
00:11:06.839  	 Malloc2p7           :       6.09      34.17       2.14       0.00     0.00  805406.12     539.93 1578583.51
00:11:06.839  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x20 length 0x20
00:11:06.839  	 Malloc2p7           :       6.02      31.88       1.99       0.00     0.00  865076.06     547.37 1448941.38
00:11:06.839  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x100
00:11:06.839  	 TestPT              :       6.48      46.93       2.93       0.00     0.00 2249166.92    1571.37 4148547.96
00:11:06.839  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x100 length 0x100
00:11:06.839  	 TestPT              :       6.50      40.01       2.50       0.00     0.00 2625044.02   67204.19 3629979.46
00:11:06.839  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x200
00:11:06.839  	 raid0               :       6.23      53.48       3.34       0.00     0.00 1946459.96    1362.85 3996027.81
00:11:06.839  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x200 length 0x200
00:11:06.839  	 raid0               :       6.51      49.14       3.07       0.00     0.00 2122561.20    1489.45 4087539.90
00:11:06.839  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x200
00:11:06.839  	 concat0             :       6.35      64.07       4.00       0.00     0.00 1598452.12    1563.93 3843507.67
00:11:06.839  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x200 length 0x200
00:11:06.839  	 concat0             :       6.53      53.92       3.37       0.00     0.00 1906889.83    1482.01 3965523.78
00:11:06.839  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x100
00:11:06.839  	 raid1               :       6.48      64.18       4.01       0.00     0.00 1554439.81    2249.08 3690987.52
00:11:06.839  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x100 length 0x100
00:11:06.839  	 raid1               :       6.50      83.49       5.22       0.00     0.00 1206553.01    2100.13 3813003.64
00:11:06.839  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x0 length 0x4e
00:11:06.839  	 AIO0                :       6.53      75.02       4.69       0.00     0.00  792169.84    1392.64 2272550.17
00:11:06.839  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:11:06.839  	 Verification LBA range: start 0x4e length 0x4e
00:11:06.839  	 AIO0                :       6.51      60.84       3.80       0.00     0.00  988672.06    2025.66 2272550.17
00:11:06.839  
[2024-11-18T05:56:27.817Z]  ===================================================================================================================
00:11:06.839  
[2024-11-18T05:56:27.817Z]  Total                       :               1771.41     110.71       0.00     0.00 1216311.00     532.48 4606108.39
00:11:07.098  
00:11:07.098  real	0m7.556s
00:11:07.098  user	0m14.199s
00:11:07.098  sys	0m0.425s
00:11:07.098   05:56:28 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:07.098   05:56:28 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:11:07.098  ************************************
00:11:07.098  END TEST bdev_verify_big_io
00:11:07.098  ************************************
00:11:07.098   05:56:28 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:07.098   05:56:28 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:07.098   05:56:28 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:07.098   05:56:28 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:07.098  ************************************
00:11:07.098  START TEST bdev_write_zeroes
00:11:07.098  ************************************
00:11:07.098   05:56:28 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
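For reference, the bdevperf options in the traced command are: --json loads the bdev configuration, -q is the per-job queue depth, -o the I/O size in bytes, -w the workload type, and -t the run time in seconds. Spelled out with comments, using the same values as the trace:

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # one-second write_zeroes pass, queue depth 128, 4 KiB I/Os
    "$BDEVPERF" --json "$CONF" -q 128 -o 4096 -w write_zeroes -t 1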
00:11:07.358  [2024-11-18 05:56:28.127223] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:07.358  [2024-11-18 05:56:28.127417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82349 ]
00:11:07.358  [2024-11-18 05:56:28.278087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:07.358  [2024-11-18 05:56:28.299786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:07.617  [2024-11-18 05:56:28.408309] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:07.617  [2024-11-18 05:56:28.408426] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:07.617  [2024-11-18 05:56:28.416270] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:07.617  [2024-11-18 05:56:28.416355] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:07.617  [2024-11-18 05:56:28.424304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:07.617  [2024-11-18 05:56:28.424386] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:07.617  [2024-11-18 05:56:28.424404] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:07.617  [2024-11-18 05:56:28.502926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:07.617  [2024-11-18 05:56:28.503035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:07.617  [2024-11-18 05:56:28.503058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x516000008a80
00:11:07.617  [2024-11-18 05:56:28.503071] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:07.617  [2024-11-18 05:56:28.505494] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:07.617  [2024-11-18 05:56:28.505551] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:07.884  Running I/O for 1 second...
00:11:08.827      81913.00 IOPS,   319.97 MiB/s
00:11:08.827                                                                                                  Latency(us)
00:11:08.827  
[2024-11-18T05:56:29.805Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:08.827  Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.827  	 Malloc0             :       1.06    5076.31      19.83       0.00     0.00   25195.34     651.64   50522.30
00:11:08.827  Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc1p0           :       1.06    5069.39      19.80       0.00     0.00   25195.95     759.62   49330.73
00:11:08.828  Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc1p1           :       1.06    5063.61      19.78       0.00     0.00   25179.50     826.65   47424.23
00:11:08.828  Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p0           :       1.06    5057.88      19.76       0.00     0.00   25160.45     748.45   45756.04
00:11:08.828  Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p1           :       1.06    5051.55      19.73       0.00     0.00   25148.51     763.35   43849.54
00:11:08.828  Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p2           :       1.07    5046.05      19.71       0.00     0.00   25134.63     707.49   41943.04
00:11:08.828  Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p3           :       1.07    5040.56      19.69       0.00     0.00   25119.86     711.21   39798.23
00:11:08.828  Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p4           :       1.07    5034.82      19.67       0.00     0.00   25103.41     696.32   38368.35
00:11:08.828  Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p5           :       1.07    5029.15      19.65       0.00     0.00   25084.83     703.77   40036.54
00:11:08.828  Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p6           :       1.07    5023.71      19.62       0.00     0.00   25069.11     707.49   42181.35
00:11:08.828  Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 Malloc2p7           :       1.07    5018.28      19.60       0.00     0.00   25049.97     711.21   43849.54
00:11:08.828  Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 TestPT              :       1.07    5012.44      19.58       0.00     0.00   25034.92     744.73   45517.73
00:11:08.828  Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 raid0               :       1.07    5006.05      19.55       0.00     0.00   25007.74    1563.93   47424.23
00:11:08.828  Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 concat0             :       1.08    4999.84      19.53       0.00     0.00   24952.80    1482.01   49092.42
00:11:08.828  Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 raid1               :       1.08    4991.53      19.50       0.00     0.00   24898.67    2398.02   50760.61
00:11:08.828  Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:08.828  	 AIO0                :       1.08    4980.23      19.45       0.00     0.00   24832.48    1541.59   52428.80
00:11:08.828  
[2024-11-18T05:56:29.806Z]  ===================================================================================================================
00:11:08.828  
[2024-11-18T05:56:29.806Z]  Total                       :              80501.41     314.46       0.00     0.00   25073.03     651.64   52428.80
00:11:09.086  
00:11:09.086  real	0m1.971s
00:11:09.086  user	0m1.533s
00:11:09.086  sys	0m0.277s
00:11:09.086   05:56:30 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:09.086  ************************************
00:11:09.086   05:56:30 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:11:09.086  END TEST bdev_write_zeroes
00:11:09.086  ************************************
00:11:09.346   05:56:30 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:09.346   05:56:30 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:09.346   05:56:30 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:09.346   05:56:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:09.346  ************************************
00:11:09.346  START TEST bdev_json_nonenclosed
00:11:09.346  ************************************
00:11:09.346   05:56:30 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:09.346  [2024-11-18 05:56:30.144352] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:09.346  [2024-11-18 05:56:30.144745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82391 ]
00:11:09.346  [2024-11-18 05:56:30.287603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:09.346  [2024-11-18 05:56:30.311062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:09.346  [2024-11-18 05:56:30.311238] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:11:09.346  [2024-11-18 05:56:30.311275] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:11:09.346  [2024-11-18 05:56:30.311292] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:09.606  
00:11:09.606  real	0m0.292s
00:11:09.606  user	0m0.121s
00:11:09.606  sys	0m0.071s
00:11:09.606   05:56:30 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:09.606   05:56:30 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:11:09.606  ************************************
00:11:09.606  END TEST bdev_json_nonenclosed
00:11:09.606  ************************************
00:11:09.606   05:56:30 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:09.606   05:56:30 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:09.606   05:56:30 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:09.606   05:56:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:09.606  ************************************
00:11:09.606  START TEST bdev_json_nonarray
00:11:09.606  ************************************
00:11:09.606   05:56:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:09.606  [2024-11-18 05:56:30.498676] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:09.606  [2024-11-18 05:56:30.498950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82417 ]
00:11:09.865  [2024-11-18 05:56:30.653509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:09.865  [2024-11-18 05:56:30.673670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:09.865  [2024-11-18 05:56:30.673864] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:11:09.865  [2024-11-18 05:56:30.673902] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:11:09.865  [2024-11-18 05:56:30.673920] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:09.865  
00:11:09.865  real	0m0.308s
00:11:09.865  user	0m0.117s
00:11:09.865  sys	0m0.091s
00:11:09.865   05:56:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:09.865   05:56:30 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:11:09.865  ************************************
00:11:09.865  END TEST bdev_json_nonarray
00:11:09.865  ************************************
00:11:09.865   05:56:30 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]]
00:11:09.865   05:56:30 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite ''
00:11:09.865   05:56:30 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:09.865   05:56:30 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:09.865   05:56:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:09.865  ************************************
00:11:09.865  START TEST bdev_qos
00:11:09.865  ************************************
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@1129 -- # qos_test_suite ''
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=82442
00:11:09.865  Process qos testing pid: 82442
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 82442'
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 82442
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # '[' -z 82442 ']'
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:09.865  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:09.865   05:56:30 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:10.123  [2024-11-18 05:56:30.863849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:10.123  [2024-11-18 05:56:30.864033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82442 ]
00:11:10.123  [2024-11-18 05:56:31.024230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:10.123  [2024-11-18 05:56:31.050722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@868 -- # return 0
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:10.382  Malloc_0
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_0
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:10.382  [
00:11:10.382    {
00:11:10.382      "name": "Malloc_0",
00:11:10.382      "aliases": [
00:11:10.382        "88372a3f-05d8-4e1a-9f16-2d29ee9abf6b"
00:11:10.382      ],
00:11:10.382      "product_name": "Malloc disk",
00:11:10.382      "block_size": 512,
00:11:10.382      "num_blocks": 262144,
00:11:10.382      "uuid": "88372a3f-05d8-4e1a-9f16-2d29ee9abf6b",
00:11:10.382      "assigned_rate_limits": {
00:11:10.382        "rw_ios_per_sec": 0,
00:11:10.382        "rw_mbytes_per_sec": 0,
00:11:10.382        "r_mbytes_per_sec": 0,
00:11:10.382        "w_mbytes_per_sec": 0
00:11:10.382      },
00:11:10.382      "claimed": false,
00:11:10.382      "zoned": false,
00:11:10.382      "supported_io_types": {
00:11:10.382        "read": true,
00:11:10.382        "write": true,
00:11:10.382        "unmap": true,
00:11:10.382        "flush": true,
00:11:10.382        "reset": true,
00:11:10.382        "nvme_admin": false,
00:11:10.382        "nvme_io": false,
00:11:10.382        "nvme_io_md": false,
00:11:10.382        "write_zeroes": true,
00:11:10.382        "zcopy": true,
00:11:10.382        "get_zone_info": false,
00:11:10.382        "zone_management": false,
00:11:10.382        "zone_append": false,
00:11:10.382        "compare": false,
00:11:10.382        "compare_and_write": false,
00:11:10.382        "abort": true,
00:11:10.382        "seek_hole": false,
00:11:10.382        "seek_data": false,
00:11:10.382        "copy": true,
00:11:10.382        "nvme_iov_md": false
00:11:10.382      },
00:11:10.382      "memory_domains": [
00:11:10.382        {
00:11:10.382          "dma_device_id": "system",
00:11:10.382          "dma_device_type": 1
00:11:10.382        },
00:11:10.382        {
00:11:10.382          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:10.382          "dma_device_type": 2
00:11:10.382        }
00:11:10.382      ],
00:11:10.382      "driver_specific": {}
00:11:10.382    }
00:11:10.382  ]
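The dump above is bdev_get_bdevs' view of the malloc bdev just created, and it is internally consistent: 262144 blocks at 512 bytes per block is exactly the 128 MiB requested by bdev_malloc_create -b Malloc_0 128 512. Outside the test harness the same pair of RPCs would be (rpc.py path relative to an SPDK checkout):

    # create a 128 MiB RAM-backed bdev with 512-byte blocks, then inspect it
    scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
    scripts/rpc.py bdev_get_bdevs -b Malloc_0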
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:10.382  Null_1
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Null_1
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:10.382   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:10.382  [
00:11:10.382    {
00:11:10.383      "name": "Null_1",
00:11:10.383      "aliases": [
00:11:10.383        "bdd6d4a3-5616-4976-bdae-d15d78bd623e"
00:11:10.383      ],
00:11:10.383      "product_name": "Null disk",
00:11:10.383      "block_size": 512,
00:11:10.383      "num_blocks": 262144,
00:11:10.383      "uuid": "bdd6d4a3-5616-4976-bdae-d15d78bd623e",
00:11:10.383      "assigned_rate_limits": {
00:11:10.383        "rw_ios_per_sec": 0,
00:11:10.383        "rw_mbytes_per_sec": 0,
00:11:10.383        "r_mbytes_per_sec": 0,
00:11:10.383        "w_mbytes_per_sec": 0
00:11:10.383      },
00:11:10.383      "claimed": false,
00:11:10.383      "zoned": false,
00:11:10.383      "supported_io_types": {
00:11:10.383        "read": true,
00:11:10.383        "write": true,
00:11:10.383        "unmap": false,
00:11:10.383        "flush": false,
00:11:10.383        "reset": true,
00:11:10.383        "nvme_admin": false,
00:11:10.383        "nvme_io": false,
00:11:10.383        "nvme_io_md": false,
00:11:10.383        "write_zeroes": true,
00:11:10.383        "zcopy": false,
00:11:10.383        "get_zone_info": false,
00:11:10.383        "zone_management": false,
00:11:10.383        "zone_append": false,
00:11:10.383        "compare": false,
00:11:10.383        "compare_and_write": false,
00:11:10.383        "abort": true,
00:11:10.383        "seek_hole": false,
00:11:10.383        "seek_data": false,
00:11:10.383        "copy": false,
00:11:10.383        "nvme_iov_md": false
00:11:10.383      },
00:11:10.383      "driver_specific": {}
00:11:10.383    }
00:11:10.383  ]
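Null_1 is the other half of the QoS fixture: a null bdev of the same geometry, created by the bdev_null_create Null_1 128 512 call traced above. Its supported_io_types differ accordingly (unmap, flush, zcopy, and copy are false, since a null backend discards writes and fabricates reads). Standalone:

    # 128 MiB null bdev, 512-byte blocks; reads return zeroes, writes are dropped
    scripts/rpc.py bdev_null_create Null_1 128 512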
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0
00:11:10.383   05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0
00:11:10.383    05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0
00:11:10.383    05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:11:10.383    05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:11:10.383    05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:10.383     05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:11:10.383     05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:10.383     05:56:31 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
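The three fragments traced above form one pipeline: iostat.py samples the per-bdev statistics repeatedly (judging by the flags, one-second intervals over roughly five seconds), grep keeps the Malloc_0 rows, and tail -1 keeps the final steady-state sample; a few lines below, awk '{print $2}' extracts the IOPS column from that line. Assembled, with paths as traced:

    /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 \
        | grep Malloc_0 | tail -1 | awk '{print $2}'    # field 2 = IOPS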
00:11:10.641  Running I/O for 60 seconds...
00:11:12.516     132096.00 IOPS,   516.00 MiB/s
[2024-11-18T05:56:34.431Z]    132096.00 IOPS,   516.00 MiB/s
[2024-11-18T05:56:35.810Z]    132437.33 IOPS,   517.33 MiB/s
[2024-11-18T05:56:36.748Z]    132864.00 IOPS,   519.00 MiB/s
[2024-11-18T05:56:36.748Z]    132608.00 IOPS,   518.00 MiB/s
[2024-11-18T05:56:36.748Z]   05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  66957.72  267830.87  0.00       0.00       270336.00  0.00     0.00   '
00:11:15.770    05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:11:15.770     05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:11:15.770    05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=66957.72
00:11:15.770    05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 66957
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=66957
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=16000
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 16000 -gt 1000 ']'
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 16000 Malloc_0
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 16000 IOPS Malloc_0
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:15.770   05:56:36 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:15.770  ************************************
00:11:15.770  START TEST bdev_qos_iops
00:11:15.770  ************************************
00:11:15.770   05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1129 -- # run_qos_test 16000 IOPS Malloc_0
00:11:15.770   05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=16000
00:11:15.770   05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0
00:11:15.770    05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0
00:11:15.770    05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:11:15.770    05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:11:15.770    05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:15.770     05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:15.770     05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1
00:11:15.770     05:56:36 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:11:17.644     118714.67 IOPS,   463.73 MiB/s
[2024-11-18T05:56:39.559Z]    107956.57 IOPS,   421.71 MiB/s
[2024-11-18T05:56:40.496Z]     99962.00 IOPS,   390.48 MiB/s
[2024-11-18T05:56:41.433Z]     93651.56 IOPS,   365.83 MiB/s
[2024-11-18T05:56:41.693Z]     88712.00 IOPS,   346.53 MiB/s
[2024-11-18T05:56:41.693Z]   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  16015.68  64062.71   0.00       0.00       65024.00   0.00     0.00   '
00:11:20.715    05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:11:20.715     05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:11:20.715    05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=16015.68
00:11:20.715    05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 16015
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=16015
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']'
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=14400
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=17600
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16015 -lt 14400 ']'
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 16015 -gt 17600 ']'
00:11:20.715  
00:11:20.715  real	0m5.225s
00:11:20.715  user	0m0.132s
00:11:20.715  sys	0m0.042s
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:20.715   05:56:41 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
00:11:20.715  ************************************
00:11:20.715  END TEST bdev_qos_iops
00:11:20.715  ************************************
00:11:20.974    05:56:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1
00:11:20.974    05:56:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:11:20.974    05:56:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:11:20.974    05:56:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:20.974     05:56:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:20.974     05:56:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1
00:11:20.974     05:56:41 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:11:22.590      84548.36 IOPS,   330.27 MiB/s
[2024-11-18T05:56:44.503Z]     81108.00 IOPS,   316.83 MiB/s
[2024-11-18T05:56:45.439Z]     78231.38 IOPS,   305.59 MiB/s
[2024-11-18T05:56:46.816Z]     75782.86 IOPS,   296.03 MiB/s
[2024-11-18T05:56:47.075Z]     73609.60 IOPS,   287.54 MiB/s
[2024-11-18T05:56:47.075Z]   05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    27203.73  108814.92  0.00       0.00       110592.00  0.00     0.00   '
00:11:26.097    05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:11:26.097    05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:26.097     05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:11:26.097    05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=110592.00
00:11:26.097    05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 110592
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=110592
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=10
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 10 -lt 2 ']'
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 10 Null_1
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 10 BANDWIDTH Null_1
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:26.097   05:56:46 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:26.097  ************************************
00:11:26.097  START TEST bdev_qos_bw
00:11:26.097  ************************************
00:11:26.097   05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1129 -- # run_qos_test 10 BANDWIDTH Null_1
00:11:26.097   05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=10
00:11:26.097   05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:11:26.097    05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1
00:11:26.097    05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:11:26.097    05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:11:26.097    05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:26.097     05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:26.097     05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1
00:11:26.097     05:56:47 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1
00:11:27.600      71021.50 IOPS,   277.43 MiB/s
[2024-11-18T05:56:49.512Z]     67939.88 IOPS,   265.39 MiB/s
[2024-11-18T05:56:50.449Z]     65197.61 IOPS,   254.68 MiB/s
[2024-11-18T05:56:51.827Z]     62745.89 IOPS,   245.10 MiB/s
[2024-11-18T05:56:52.395Z]     60543.20 IOPS,   236.50 MiB/s
[2024-11-18T05:56:52.395Z]   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    2560.72   10242.90   0.00       0.00       10412.00  0.00     0.00   '
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:31.417     05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=10412.00
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 10412
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=10412
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=10240
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=9216
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=11264
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 10412 -lt 9216 ']'
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 10412 -gt 11264 ']'
00:11:31.417  
00:11:31.417  real	0m5.258s
00:11:31.417  user	0m0.134s
00:11:31.417  sys	0m0.037s
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:11:31.417  ************************************
00:11:31.417  END TEST bdev_qos_bw
00:11:31.417  ************************************
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:31.417   05:56:52 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:31.417  ************************************
00:11:31.417  START TEST bdev_qos_ro_bw
00:11:31.417  ************************************
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1129 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2
00:11:31.417   05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:11:31.417    05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:31.417     05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1
00:11:31.417     05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:11:31.417     05:56:52 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:32.614      58461.95 IOPS,   228.37 MiB/s
[2024-11-18T05:56:54.529Z]     55944.23 IOPS,   218.53 MiB/s
[2024-11-18T05:56:55.466Z]     53645.39 IOPS,   209.55 MiB/s
[2024-11-18T05:56:56.844Z]     51538.17 IOPS,   201.32 MiB/s
[2024-11-18T05:56:57.812Z]     49599.60 IOPS,   193.75 MiB/s
[2024-11-18T05:56:57.812Z]     47810.08 IOPS,   186.76 MiB/s
[2024-11-18T05:56:57.812Z]   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  511.56   2046.25    0.00       0.00       2064.00   0.00     0.00   '
00:11:36.834    05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:11:36.834    05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:36.834     05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:11:36.834    05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2064.00
00:11:36.834    05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2064
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2064
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -lt 1843 ']'
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2064 -gt 2252 ']'
00:11:36.834  
00:11:36.834  real	0m5.194s
00:11:36.834  user	0m0.132s
00:11:36.834  sys	0m0.035s
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:36.834   05:56:57 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:11:36.834  ************************************
00:11:36.834  END TEST bdev_qos_ro_bw
00:11:36.834  ************************************
00:11:36.834   05:56:57 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:11:36.834   05:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:36.834   05:56:57 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.402  
00:11:37.402                                                                                                  Latency(us)
00:11:37.402  
[2024-11-18T05:56:58.380Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:37.402  Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:11:37.402  	 Malloc_0            :      26.73   22164.33      86.58       0.00     0.00   11442.99    3261.91  507129.48
00:11:37.402  Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:11:37.402  	 Null_1              :      26.85   24318.76      95.00       0.00     0.00   10501.85     975.59  119156.36
00:11:37.402  
[2024-11-18T05:56:58.380Z]  ===================================================================================================================
00:11:37.402  
[2024-11-18T05:56:58.380Z]  Total                       :              46483.09     181.57       0.00     0.00   10949.60     975.59  507129.48
00:11:37.402  {
00:11:37.402    "results": [
00:11:37.402      {
00:11:37.402        "job": "Malloc_0",
00:11:37.402        "core_mask": "0x2",
00:11:37.402        "workload": "randread",
00:11:37.402        "status": "finished",
00:11:37.402        "queue_depth": 256,
00:11:37.402        "io_size": 4096,
00:11:37.402        "runtime": 26.732232,
00:11:37.402        "iops": 22164.329562903687,
00:11:37.402        "mibps": 86.57941235509253,
00:11:37.402        "io_failed": 0,
00:11:37.402        "io_timeout": 0,
00:11:37.402        "avg_latency_us": 11442.985476848411,
00:11:37.402        "min_latency_us": 3261.9054545454546,
00:11:37.402        "max_latency_us": 507129.48363636364
00:11:37.402      },
00:11:37.402      {
00:11:37.402        "job": "Null_1",
00:11:37.402        "core_mask": "0x2",
00:11:37.402        "workload": "randread",
00:11:37.402        "status": "finished",
00:11:37.402        "queue_depth": 256,
00:11:37.402        "io_size": 4096,
00:11:37.402        "runtime": 26.84689,
00:11:37.402        "iops": 24318.75721917883,
00:11:37.402        "mibps": 94.99514538741731,
00:11:37.402        "io_failed": 0,
00:11:37.402        "io_timeout": 0,
00:11:37.402        "avg_latency_us": 10501.851797864938,
00:11:37.402        "min_latency_us": 975.5927272727273,
00:11:37.402        "max_latency_us": 119156.36363636363
00:11:37.402      }
00:11:37.402    ],
00:11:37.402    "core_count": 1
00:11:37.402  }
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 82442
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' -z 82442 ']'
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # kill -0 82442
00:11:37.402    05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # uname
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:37.402    05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82442
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:37.402  killing process with pid 82442
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82442'
00:11:37.402  Received shutdown signal, test time was about 26.891439 seconds
00:11:37.402  
00:11:37.402                                                                                                  Latency(us)
00:11:37.402  
[2024-11-18T05:56:58.380Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:37.402  
[2024-11-18T05:56:58.380Z]  ===================================================================================================================
00:11:37.402  
[2024-11-18T05:56:58.380Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@973 -- # kill 82442
00:11:37.402   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@978 -- # wait 82442
00:11:37.662   05:56:58 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT
00:11:37.662  
00:11:37.662  real	0m27.639s
00:11:37.662  user	0m28.403s
00:11:37.662  sys	0m0.580s
00:11:37.662   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:37.662   05:56:58 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.662  ************************************
00:11:37.662  END TEST bdev_qos
00:11:37.662  ************************************
00:11:37.662   05:56:58 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:11:37.662   05:56:58 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:37.662   05:56:58 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:37.662   05:56:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:37.662  ************************************
00:11:37.662  START TEST bdev_qd_sampling
00:11:37.662  ************************************
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1129 -- # qd_sampling_test_suite ''
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=82848
00:11:37.662  Process bdev QD sampling period testing pid: 82848
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 82848'
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 82848
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # '[' -z 82848 ']'
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:37.662  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:37.662   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:37.662  [2024-11-18 05:56:58.554100] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:37.662  [2024-11-18 05:56:58.554289] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82848 ]
00:11:37.922  [2024-11-18 05:56:58.714388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:37.922  [2024-11-18 05:56:58.742444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:37.922  [2024-11-18 05:56:58.742489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@868 -- # return 0
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:37.922  Malloc_QD
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_QD
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # local i
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:37.922  [
00:11:37.922    {
00:11:37.922      "name": "Malloc_QD",
00:11:37.922      "aliases": [
00:11:37.922        "68008974-2e53-4f8f-b5c7-0c670e16db21"
00:11:37.922      ],
00:11:37.922      "product_name": "Malloc disk",
00:11:37.922      "block_size": 512,
00:11:37.922      "num_blocks": 262144,
00:11:37.922      "uuid": "68008974-2e53-4f8f-b5c7-0c670e16db21",
00:11:37.922      "assigned_rate_limits": {
00:11:37.922        "rw_ios_per_sec": 0,
00:11:37.922        "rw_mbytes_per_sec": 0,
00:11:37.922        "r_mbytes_per_sec": 0,
00:11:37.922        "w_mbytes_per_sec": 0
00:11:37.922      },
00:11:37.922      "claimed": false,
00:11:37.922      "zoned": false,
00:11:37.922      "supported_io_types": {
00:11:37.922        "read": true,
00:11:37.922        "write": true,
00:11:37.922        "unmap": true,
00:11:37.922        "flush": true,
00:11:37.922        "reset": true,
00:11:37.922        "nvme_admin": false,
00:11:37.922        "nvme_io": false,
00:11:37.922        "nvme_io_md": false,
00:11:37.922        "write_zeroes": true,
00:11:37.922        "zcopy": true,
00:11:37.922        "get_zone_info": false,
00:11:37.922        "zone_management": false,
00:11:37.922        "zone_append": false,
00:11:37.922        "compare": false,
00:11:37.922        "compare_and_write": false,
00:11:37.922        "abort": true,
00:11:37.922        "seek_hole": false,
00:11:37.922        "seek_data": false,
00:11:37.922        "copy": true,
00:11:37.922        "nvme_iov_md": false
00:11:37.922      },
00:11:37.922      "memory_domains": [
00:11:37.922        {
00:11:37.922          "dma_device_id": "system",
00:11:37.922          "dma_device_type": 1
00:11:37.922        },
00:11:37.922        {
00:11:37.922          "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:37.922          "dma_device_type": 2
00:11:37.922        }
00:11:37.922      ],
00:11:37.922      "driver_specific": {}
00:11:37.922    }
00:11:37.922  ]
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@911 -- # return 0
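The setup traced above creates a 128 MiB malloc bdev with 512-byte blocks (hence num_blocks 262144), waits for bdev examination, then polls for the bdev with a 2 s timeout. A minimal by-hand sketch of the same three RPCs, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock socket (the harness itself goes through its own rpc_cmd wrapper):

    ./scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512   # 128 MiB at 512 B/block -> 262144 blocks
    ./scripts/rpc.py bdev_wait_for_examine                     # block until examine callbacks complete
    ./scripts/rpc.py bdev_get_bdevs -b Malloc_QD -t 2000       # wait up to 2000 ms for the bdev to appear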
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2
00:11:37.922   05:56:58 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:38.181  Running I/O for 5 seconds...
00:11:40.054     106496.00 IOPS,   416.00 MiB/s
[2024-11-18T05:57:01.032Z]  05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.054    05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:11:40.054    05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.054    05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:40.054    05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{
00:11:40.054  "tick_rate": 2200000000,
00:11:40.054  "ticks": 1385998234688,
00:11:40.054  "bdevs": [
00:11:40.054  {
00:11:40.054  "name": "Malloc_QD",
00:11:40.054  "bytes_read": 825266688,
00:11:40.054  "num_read_ops": 201475,
00:11:40.054  "bytes_written": 0,
00:11:40.054  "num_write_ops": 0,
00:11:40.054  "bytes_unmapped": 0,
00:11:40.054  "num_unmap_ops": 0,
00:11:40.054  "bytes_copied": 0,
00:11:40.054  "num_copy_ops": 0,
00:11:40.054  "read_latency_ticks": 2134664676308,
00:11:40.054  "max_read_latency_ticks": 16060370,
00:11:40.054  "min_read_latency_ticks": 398412,
00:11:40.054  "write_latency_ticks": 0,
00:11:40.054  "max_write_latency_ticks": 0,
00:11:40.054  "min_write_latency_ticks": 0,
00:11:40.054  "unmap_latency_ticks": 0,
00:11:40.054  "max_unmap_latency_ticks": 0,
00:11:40.054  "min_unmap_latency_ticks": 0,
00:11:40.054  "copy_latency_ticks": 0,
00:11:40.054  "max_copy_latency_ticks": 0,
00:11:40.054  "min_copy_latency_ticks": 0,
00:11:40.054  "io_error": {},
00:11:40.054  "queue_depth_polling_period": 10,
00:11:40.054  "queue_depth": 512,
00:11:40.054  "io_time": 30,
00:11:40.054  "weighted_io_time": 15360
00:11:40.054  }
00:11:40.054  ]
00:11:40.054  }'
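A quick consistency check on the sampled stats above: reading weighted_io_time as queue depth integrated over io_time (an inference from these numbers, not a spec quote), dividing recovers the mean queue depth, 15360 / 30 = 512. That matches the reported "queue_depth": 512, which is exactly what two randread jobs at depth 256 would keep outstanding against a single bdev.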
00:11:40.054    05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']'
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']'
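The period check above is easy to reproduce outside the harness; a sketch using the same jq filter, with scripts/rpc.py standing in for the harness's rpc_cmd:

    ./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10
    period=$(./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period')
    [ "$period" != null ] && [ "$period" -eq 10 ] || echo "unexpected sampling period: $period" >&2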
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:40.054                                                                                                  Latency(us)
00:11:40.054  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:40.054  Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:11:40.054  	 Malloc_QD           :       1.94   52875.52     206.54       0.00     0.00    4828.71    1385.19    7000.44
00:11:40.054  Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:11:40.054  	 Malloc_QD           :       1.94   53209.30     207.85       0.00     0.00    4798.81     997.93    7328.12
00:11:40.054  ===================================================================================================================
00:11:40.054  Total                       :             106084.82     414.39       0.00     0.00    4813.71     997.93    7328.12
00:11:40.054  {
00:11:40.054    "results": [
00:11:40.054      {
00:11:40.054        "job": "Malloc_QD",
00:11:40.054        "core_mask": "0x1",
00:11:40.054        "workload": "randread",
00:11:40.054        "status": "finished",
00:11:40.054        "queue_depth": 256,
00:11:40.054        "io_size": 4096,
00:11:40.054        "runtime": 1.936624,
00:11:40.054        "iops": 52875.519460669704,
00:11:40.054        "mibps": 206.54499789324103,
00:11:40.054        "io_failed": 0,
00:11:40.054        "io_timeout": 0,
00:11:40.054        "avg_latency_us": 4828.710290909091,
00:11:40.054        "min_latency_us": 1385.1927272727273,
00:11:40.054        "max_latency_us": 7000.436363636363
00:11:40.054      },
00:11:40.054      {
00:11:40.054        "job": "Malloc_QD",
00:11:40.054        "core_mask": "0x2",
00:11:40.054        "workload": "randread",
00:11:40.054        "status": "finished",
00:11:40.054        "queue_depth": 256,
00:11:40.054        "io_size": 4096,
00:11:40.054        "runtime": 1.938909,
00:11:40.054        "iops": 53209.30482039126,
00:11:40.054        "mibps": 207.84884695465337,
00:11:40.054        "io_failed": 0,
00:11:40.054        "io_timeout": 0,
00:11:40.054        "avg_latency_us": 4798.813300248139,
00:11:40.054        "min_latency_us": 997.9345454545454,
00:11:40.054        "max_latency_us": 7328.1163636363635
00:11:40.054      }
00:11:40.054    ],
00:11:40.054    "core_count": 2
00:11:40.054  }
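The summary table and this JSON agree: 52875.519 + 53209.305 = 106084.824 IOPS across the two per-core jobs, the Total row above. To recompute from a saved copy of the JSON (results.json is a hypothetical filename):

    jq '[.results[].iops] | add' results.json   # -> 106084.824..., the aggregate IOPS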
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 82848
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' -z 82848 ']'
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # kill -0 82848
00:11:40.054    05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # uname
00:11:40.054   05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:40.054    05:57:00 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82848
00:11:40.054   05:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:40.054   05:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:40.054   05:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82848'
00:11:40.054  killing process with pid 82848
00:11:40.054   05:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # kill 82848
00:11:40.054  Received shutdown signal, test time was about 1.995094 seconds
00:11:40.054                                                                                                  Latency(us)
00:11:40.054  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:40.054  ===================================================================================================================
00:11:40.054  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:11:40.054   05:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@978 -- # wait 82848
00:11:40.314  ************************************
00:11:40.314  END TEST bdev_qd_sampling
00:11:40.314  ************************************
00:11:40.314   05:57:01 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT
00:11:40.314  
00:11:40.314  real	0m2.703s
00:11:40.314  user	0m5.174s
00:11:40.314  sys	0m0.286s
00:11:40.314   05:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:40.314   05:57:01 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:11:40.314   05:57:01 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite ''
00:11:40.314   05:57:01 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:40.314   05:57:01 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:40.314   05:57:01 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:40.314  ************************************
00:11:40.314  START TEST bdev_error
00:11:40.314  ************************************
00:11:40.314   05:57:01 blockdev_general.bdev_error -- common/autotest_common.sh@1129 -- # error_test_suite ''
00:11:40.314   05:57:01 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1
00:11:40.314   05:57:01 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2
00:11:40.314   05:57:01 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1
00:11:40.314   05:57:01 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=82912
00:11:40.314  Process error testing pid: 82912
00:11:40.314   05:57:01 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
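Decoding that bdevperf invocation (per standard bdevperf usage): -z starts the app idle until the perform_tests RPC arrives, -m 0x2 pins it to core 1, -q 16 and -o 4096 set queue depth and I/O size, -w randread picks the workload, and -t 5 bounds each run to five seconds. The trailing -f '' is left exactly as the harness passed it.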
00:11:40.314   05:57:01 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 82912'
00:11:40.315   05:57:01 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 82912
00:11:40.315   05:57:01 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 82912 ']'
00:11:40.315   05:57:01 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:40.315   05:57:01 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:40.315  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:40.315   05:57:01 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:40.315   05:57:01 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:40.315   05:57:01 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:40.573  [2024-11-18 05:57:01.311746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:40.573  [2024-11-18 05:57:01.311984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82912 ]
00:11:40.574  [2024-11-18 05:57:01.466345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:40.574  [2024-11-18 05:57:01.488494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:11:41.511   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.511  Dev_1
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.511   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.511  [
00:11:41.511  {
00:11:41.511  "name": "Dev_1",
00:11:41.511  "aliases": [
00:11:41.511  "b8549d24-cedb-4353-aa07-dfc8ac9ab503"
00:11:41.511  ],
00:11:41.511  "product_name": "Malloc disk",
00:11:41.511  "block_size": 512,
00:11:41.511  "num_blocks": 262144,
00:11:41.511  "uuid": "b8549d24-cedb-4353-aa07-dfc8ac9ab503",
00:11:41.511  "assigned_rate_limits": {
00:11:41.511  "rw_ios_per_sec": 0,
00:11:41.511  "rw_mbytes_per_sec": 0,
00:11:41.511  "r_mbytes_per_sec": 0,
00:11:41.511  "w_mbytes_per_sec": 0
00:11:41.511  },
00:11:41.511  "claimed": false,
00:11:41.511  "zoned": false,
00:11:41.511  "supported_io_types": {
00:11:41.511  "read": true,
00:11:41.511  "write": true,
00:11:41.511  "unmap": true,
00:11:41.511  "flush": true,
00:11:41.511  "reset": true,
00:11:41.511  "nvme_admin": false,
00:11:41.511  "nvme_io": false,
00:11:41.511  "nvme_io_md": false,
00:11:41.511  "write_zeroes": true,
00:11:41.511  "zcopy": true,
00:11:41.511  "get_zone_info": false,
00:11:41.511  "zone_management": false,
00:11:41.511  "zone_append": false,
00:11:41.511  "compare": false,
00:11:41.511  "compare_and_write": false,
00:11:41.511  "abort": true,
00:11:41.511  "seek_hole": false,
00:11:41.511  "seek_data": false,
00:11:41.511  "copy": true,
00:11:41.511  "nvme_iov_md": false
00:11:41.511  },
00:11:41.511  "memory_domains": [
00:11:41.511  {
00:11:41.511  "dma_device_id": "system",
00:11:41.511  "dma_device_type": 1
00:11:41.511  },
00:11:41.511  {
00:11:41.511  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:41.511  "dma_device_type": 2
00:11:41.511  }
00:11:41.511  ],
00:11:41.511  "driver_specific": {}
00:11:41.511  }
00:11:41.511  ]
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:11:41.511   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.511  true
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.511   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.511  Dev_2
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.511   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:11:41.511   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.512  [
00:11:41.512  {
00:11:41.512  "name": "Dev_2",
00:11:41.512  "aliases": [
00:11:41.512  "d304c3e6-8bf0-4f4c-babf-9cb4d43ddaee"
00:11:41.512  ],
00:11:41.512  "product_name": "Malloc disk",
00:11:41.512  "block_size": 512,
00:11:41.512  "num_blocks": 262144,
00:11:41.512  "uuid": "d304c3e6-8bf0-4f4c-babf-9cb4d43ddaee",
00:11:41.512  "assigned_rate_limits": {
00:11:41.512  "rw_ios_per_sec": 0,
00:11:41.512  "rw_mbytes_per_sec": 0,
00:11:41.512  "r_mbytes_per_sec": 0,
00:11:41.512  "w_mbytes_per_sec": 0
00:11:41.512  },
00:11:41.512  "claimed": false,
00:11:41.512  "zoned": false,
00:11:41.512  "supported_io_types": {
00:11:41.512  "read": true,
00:11:41.512  "write": true,
00:11:41.512  "unmap": true,
00:11:41.512  "flush": true,
00:11:41.512  "reset": true,
00:11:41.512  "nvme_admin": false,
00:11:41.512  "nvme_io": false,
00:11:41.512  "nvme_io_md": false,
00:11:41.512  "write_zeroes": true,
00:11:41.512  "zcopy": true,
00:11:41.512  "get_zone_info": false,
00:11:41.512  "zone_management": false,
00:11:41.512  "zone_append": false,
00:11:41.512  "compare": false,
00:11:41.512  "compare_and_write": false,
00:11:41.512  "abort": true,
00:11:41.512  "seek_hole": false,
00:11:41.512  "seek_data": false,
00:11:41.512  "copy": true,
00:11:41.512  "nvme_iov_md": false
00:11:41.512  },
00:11:41.512  "memory_domains": [
00:11:41.512  {
00:11:41.512  "dma_device_id": "system",
00:11:41.512  "dma_device_type": 1
00:11:41.512  },
00:11:41.512  {
00:11:41.512  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:41.512  "dma_device_type": 2
00:11:41.512  }
00:11:41.512  ],
00:11:41.512  "driver_specific": {}
00:11:41.512  }
00:11:41.512  ]
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:11:41.512   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:41.512   05:57:02 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
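At this point the error stack is armed: an error bdev sits on top of Dev_1 and will fail the next five I/Os of any type. The equivalent by-hand pair, using the same RPCs traced above (scripts/rpc.py again standing in for rpc_cmd):

    ./scripts/rpc.py bdev_error_create Dev_1                             # exposes EE_Dev_1 over Dev_1
    ./scripts/rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type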
00:11:41.512   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1
00:11:41.512   05:57:02 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:11:41.771  Running I/O for 5 seconds...
00:11:42.710   05:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 82912
00:11:42.710  Process still exists as continue-on-error is set. Pid: 82912
00:11:42.710   05:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process still exists as continue-on-error is set. Pid: 82912'
00:11:42.710   05:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:11:42.710   05:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.710   05:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:42.710   05:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.710   05:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1
00:11:42.710   05:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.710   05:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:42.710   05:57:03 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.710   05:57:03 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5
00:11:42.710  Timeout while waiting for response:
00:11:42.710  
00:11:42.710  
00:11:43.648      71803.00 IOPS,   280.48 MiB/s
[2024-11-18T05:57:05.564Z]     81285.50 IOPS,   317.52 MiB/s
[2024-11-18T05:57:06.500Z]     84990.33 IOPS,   331.99 MiB/s
[2024-11-18T05:57:07.879Z]     86990.75 IOPS,   339.81 MiB/s
[2024-11-18T05:57:07.879Z]     87989.40 IOPS,   343.71 MiB/s
00:11:46.901                                                                                                  Latency(us)
00:11:46.902  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:46.902  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:11:46.902  	 EE_Dev_1            :       0.88   35546.22     138.85       5.67     0.00     446.72     207.59     949.53
00:11:46.902  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:11:46.902  	 Dev_2               :       5.00   81659.73     318.98       0.00     0.00     192.77     136.84   13643.40
00:11:46.902  ===================================================================================================================
00:11:46.902  Total                       :             117205.96     457.84       5.67     0.00     210.88     136.84   13643.40
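The Fail/s column lines up with the injection: EE_Dev_1 ran for 0.88 s at 5.67 failures/s, and 5.67 * 0.88 ~ 5 failed I/Os, exactly the -n 5 armed earlier; with continue-on-error in effect, Dev_2 then carried the remaining ~5 s of traffic with zero failures.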
00:11:47.469   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 82912
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' -z 82912 ']'
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # kill -0 82912
00:11:47.469    05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # uname
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:47.469    05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82912
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:11:47.469  killing process with pid 82912
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82912'
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # kill 82912
00:11:47.469  Received shutdown signal, test time was about 5.000000 seconds
00:11:47.469                                                                                                  Latency(us)
00:11:47.469  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:47.469  ===================================================================================================================
00:11:47.469  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:11:47.469   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@978 -- # wait 82912
00:11:47.728   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=82997
00:11:47.728   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
00:11:47.728  Process error testing pid: 82997
00:11:47.728   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 82997'
00:11:47.728   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 82997
00:11:47.728   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 82997 ']'
00:11:47.728   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:47.728   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:47.728  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:47.728   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:47.728   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:47.728   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:47.728  [2024-11-18 05:57:08.680458] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:47.728  [2024-11-18 05:57:08.680657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82997 ]
00:11:47.987  [2024-11-18 05:57:08.835288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:47.987  [2024-11-18 05:57:08.856863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:47.987   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:47.987   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:11:47.987   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:11:47.987   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:47.987   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:47.987  Dev_1
00:11:47.987   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:47.988   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1
00:11:47.988   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:11:47.988   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:47.988   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:11:47.988   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:47.988   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:48.247   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:48.247   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.247   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.247   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.247   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:11:48.247   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.247   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.247  [
00:11:48.247  {
00:11:48.247  "name": "Dev_1",
00:11:48.247  "aliases": [
00:11:48.247  "4f74cff5-d892-410f-9dcd-9fc10b1858ad"
00:11:48.247  ],
00:11:48.247  "product_name": "Malloc disk",
00:11:48.247  "block_size": 512,
00:11:48.247  "num_blocks": 262144,
00:11:48.247  "uuid": "4f74cff5-d892-410f-9dcd-9fc10b1858ad",
00:11:48.247  "assigned_rate_limits": {
00:11:48.247  "rw_ios_per_sec": 0,
00:11:48.247  "rw_mbytes_per_sec": 0,
00:11:48.247  "r_mbytes_per_sec": 0,
00:11:48.247  "w_mbytes_per_sec": 0
00:11:48.247  },
00:11:48.247  "claimed": false,
00:11:48.247  "zoned": false,
00:11:48.247  "supported_io_types": {
00:11:48.247  "read": true,
00:11:48.247  "write": true,
00:11:48.247  "unmap": true,
00:11:48.247  "flush": true,
00:11:48.247  "reset": true,
00:11:48.247  "nvme_admin": false,
00:11:48.247  "nvme_io": false,
00:11:48.247  "nvme_io_md": false,
00:11:48.247  "write_zeroes": true,
00:11:48.247  "zcopy": true,
00:11:48.247  "get_zone_info": false,
00:11:48.247  "zone_management": false,
00:11:48.247  "zone_append": false,
00:11:48.247  "compare": false,
00:11:48.247  "compare_and_write": false,
00:11:48.247  "abort": true,
00:11:48.247  "seek_hole": false,
00:11:48.247  "seek_data": false,
00:11:48.247  "copy": true,
00:11:48.247  "nvme_iov_md": false
00:11:48.247  },
00:11:48.247  "memory_domains": [
00:11:48.247  {
00:11:48.247  "dma_device_id": "system",
00:11:48.247  "dma_device_type": 1
00:11:48.247  },
00:11:48.247  {
00:11:48.248  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:48.248  "dma_device_type": 2
00:11:48.248  }
00:11:48.248  ],
00:11:48.248  "driver_specific": {}
00:11:48.248  }
00:11:48.248  ]
00:11:48.248   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.248   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:11:48.248   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1
00:11:48.248   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.248   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.248  true
00:11:48.248   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.248   05:57:08 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:11:48.248   05:57:08 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.248  Dev_2
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.248   05:57:09 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.248  [
00:11:48.248  {
00:11:48.248  "name": "Dev_2",
00:11:48.248  "aliases": [
00:11:48.248  "6b21f016-99b1-4827-aa38-bfedbf2b1c0a"
00:11:48.248  ],
00:11:48.248  "product_name": "Malloc disk",
00:11:48.248  "block_size": 512,
00:11:48.248  "num_blocks": 262144,
00:11:48.248  "uuid": "6b21f016-99b1-4827-aa38-bfedbf2b1c0a",
00:11:48.248  "assigned_rate_limits": {
00:11:48.248  "rw_ios_per_sec": 0,
00:11:48.248  "rw_mbytes_per_sec": 0,
00:11:48.248  "r_mbytes_per_sec": 0,
00:11:48.248  "w_mbytes_per_sec": 0
00:11:48.248  },
00:11:48.248  "claimed": false,
00:11:48.248  "zoned": false,
00:11:48.248  "supported_io_types": {
00:11:48.248  "read": true,
00:11:48.248  "write": true,
00:11:48.248  "unmap": true,
00:11:48.248  "flush": true,
00:11:48.248  "reset": true,
00:11:48.248  "nvme_admin": false,
00:11:48.248  "nvme_io": false,
00:11:48.248  "nvme_io_md": false,
00:11:48.248  "write_zeroes": true,
00:11:48.248  "zcopy": true,
00:11:48.248  "get_zone_info": false,
00:11:48.248  "zone_management": false,
00:11:48.248  "zone_append": false,
00:11:48.248  "compare": false,
00:11:48.248  "compare_and_write": false,
00:11:48.248  "abort": true,
00:11:48.248  "seek_hole": false,
00:11:48.248  "seek_data": false,
00:11:48.248  "copy": true,
00:11:48.248  "nvme_iov_md": false
00:11:48.248  },
00:11:48.248  "memory_domains": [
00:11:48.248  {
00:11:48.248  "dma_device_id": "system",
00:11:48.248  "dma_device_type": 1
00:11:48.248  },
00:11:48.248  {
00:11:48.248  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:48.248  "dma_device_type": 2
00:11:48.248  }
00:11:48.248  ],
00:11:48.248  "driver_specific": {}
00:11:48.248  }
00:11:48.248  ]
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:11:48.248   05:57:09 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:48.248   05:57:09 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 82997
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # local es=0
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@654 -- # valid_exec_arg wait 82997
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # local arg=wait
00:11:48.248   05:57:09 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:48.248    05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # type -t wait
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:48.248   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # wait 82997
00:11:48.248  Running I/O for 5 seconds...
00:11:48.248  task offset: 149816 on job bdev=EE_Dev_1 fails
00:11:48.248                                                                                                  Latency(us)
00:11:48.248  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:48.248  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:11:48.248  Job: EE_Dev_1 ended in about 0.00 seconds with error
00:11:48.248  	 EE_Dev_1            :       0.00   18121.91      70.79    4118.62     0.00     583.08     220.63    1042.62
00:11:48.248  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:11:48.248  	 Dev_2               :       0.00   14739.75      57.58       0.00     0.00     729.72     247.62    1273.48
00:11:48.248  ===================================================================================================================
00:11:48.248  Total                       :              32861.66     128.37    4118.62     0.00     662.61     220.63    1273.48
00:11:48.248  [2024-11-18 05:57:09.192592] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:48.248  request:
00:11:48.248  {
00:11:48.248    "method": "perform_tests",
00:11:48.248    "req_id": 1
00:11:48.248  }
00:11:48.248  Got JSON-RPC error response
00:11:48.248  response:
00:11:48.248  {
00:11:48.248    "code": -32603,
00:11:48.248    "message": "bdevperf failed with error Operation not permitted"
00:11:48.248  }
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # es=255
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@664 -- # es=127
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@665 -- # case "$es" in
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@672 -- # es=1
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@679 -- # (( !es == 0 ))
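That exit-status sequence is the NOT helper verifying that bdevperf failed as intended: wait returned 255, statuses above 128 get normalized to 127, any nonzero status then collapses to es=1, and (( !es == 0 )) succeeds. A hedged reconstruction of the helper's logic (behavioral sketch only, not SPDK's verbatim source):

    NOT() {
        "$@"
        local es=$?                # capture the wrapped command's exit status
        (( es > 128 )) && es=127   # clamp signal-style exit codes
        (( es != 0 ))              # succeed only when the wrapped command failed
    }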
00:11:48.506  
00:11:48.506  real	0m8.172s
00:11:48.506  user	0m8.492s
00:11:48.506  sys	0m0.573s
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:48.506   05:57:09 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:11:48.506  ************************************
00:11:48.506  END TEST bdev_error
00:11:48.506  ************************************
00:11:48.506   05:57:09 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite ''
00:11:48.506   05:57:09 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:48.507   05:57:09 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:48.507   05:57:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:48.765  ************************************
00:11:48.765  START TEST bdev_stat
00:11:48.765  ************************************
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@1129 -- # stat_test_suite ''
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=83031
00:11:48.765  Process Bdev IO statistics testing pid: 83031
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 83031'
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''
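Relative to the error suite, this invocation widens the rig: -m 0x3 hands bdevperf both reactors, -q 256 deepens the queue, and -t 10 doubles the runtime; -C plausibly enables the per-channel accounting that bdev_get_iostat -c reads back later in this test (that last reading is an inference from this trace, not a documented claim).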
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 83031
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # '[' -z 83031 ']'
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:48.765  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:48.765   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:48.765  [2024-11-18 05:57:09.537596] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:48.765  [2024-11-18 05:57:09.537771] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83031 ]
00:11:48.765  [2024-11-18 05:57:09.694115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:48.765  [2024-11-18 05:57:09.720883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:48.765  [2024-11-18 05:57:09.720915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@868 -- # return 0
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:49.024  Malloc_STAT
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT
00:11:49.024   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_STAT
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # local i
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:49.025  [
00:11:49.025  {
00:11:49.025  "name": "Malloc_STAT",
00:11:49.025  "aliases": [
00:11:49.025  "268cf2c2-4e61-4315-96f1-bf99e6a61bb4"
00:11:49.025  ],
00:11:49.025  "product_name": "Malloc disk",
00:11:49.025  "block_size": 512,
00:11:49.025  "num_blocks": 262144,
00:11:49.025  "uuid": "268cf2c2-4e61-4315-96f1-bf99e6a61bb4",
00:11:49.025  "assigned_rate_limits": {
00:11:49.025  "rw_ios_per_sec": 0,
00:11:49.025  "rw_mbytes_per_sec": 0,
00:11:49.025  "r_mbytes_per_sec": 0,
00:11:49.025  "w_mbytes_per_sec": 0
00:11:49.025  },
00:11:49.025  "claimed": false,
00:11:49.025  "zoned": false,
00:11:49.025  "supported_io_types": {
00:11:49.025  "read": true,
00:11:49.025  "write": true,
00:11:49.025  "unmap": true,
00:11:49.025  "flush": true,
00:11:49.025  "reset": true,
00:11:49.025  "nvme_admin": false,
00:11:49.025  "nvme_io": false,
00:11:49.025  "nvme_io_md": false,
00:11:49.025  "write_zeroes": true,
00:11:49.025  "zcopy": true,
00:11:49.025  "get_zone_info": false,
00:11:49.025  "zone_management": false,
00:11:49.025  "zone_append": false,
00:11:49.025  "compare": false,
00:11:49.025  "compare_and_write": false,
00:11:49.025  "abort": true,
00:11:49.025  "seek_hole": false,
00:11:49.025  "seek_data": false,
00:11:49.025  "copy": true,
00:11:49.025  "nvme_iov_md": false
00:11:49.025  },
00:11:49.025  "memory_domains": [
00:11:49.025  {
00:11:49.025  "dma_device_id": "system",
00:11:49.025  "dma_device_type": 1
00:11:49.025  },
00:11:49.025  {
00:11:49.025  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:49.025  "dma_device_type": 2
00:11:49.025  }
00:11:49.025  ],
00:11:49.025  "driver_specific": {}
00:11:49.025  }
00:11:49.025  ]
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- common/autotest_common.sh@911 -- # return 0
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2
00:11:49.025   05:57:09 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:49.284  Running I/O for 10 seconds...
00:11:51.218     107008.00 IOPS,   418.00 MiB/s
[2024-11-18T05:57:12.196Z]  05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{
00:11:51.218  "tick_rate": 2200000000,
00:11:51.218  "ticks": 1410142385538,
00:11:51.218  "bdevs": [
00:11:51.218  {
00:11:51.218  "name": "Malloc_STAT",
00:11:51.218  "bytes_read": 816878080,
00:11:51.218  "num_read_ops": 199427,
00:11:51.218  "bytes_written": 0,
00:11:51.218  "num_write_ops": 0,
00:11:51.218  "bytes_unmapped": 0,
00:11:51.218  "num_unmap_ops": 0,
00:11:51.218  "bytes_copied": 0,
00:11:51.218  "num_copy_ops": 0,
00:11:51.218  "read_latency_ticks": 2090387185326,
00:11:51.218  "max_read_latency_ticks": 14128686,
00:11:51.218  "min_read_latency_ticks": 391688,
00:11:51.218  "write_latency_ticks": 0,
00:11:51.218  "max_write_latency_ticks": 0,
00:11:51.218  "min_write_latency_ticks": 0,
00:11:51.218  "unmap_latency_ticks": 0,
00:11:51.218  "max_unmap_latency_ticks": 0,
00:11:51.218  "min_unmap_latency_ticks": 0,
00:11:51.218  "copy_latency_ticks": 0,
00:11:51.218  "max_copy_latency_ticks": 0,
00:11:51.218  "min_copy_latency_ticks": 0,
00:11:51.218  "io_error": {}
00:11:51.218  }
00:11:51.218  ]
00:11:51.218  }'
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops'
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=199427
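The raw counters also give the mean read latency directly: read_latency_ticks / num_read_ops is ticks per I/O, and at the 2.2 GHz tick_rate there are 2200 ticks per microsecond. In integer bash arithmetic on this snapshot:

    echo $(( 2090387185326 / 199427 / 2200 ))   # ~4764 us per read, consistent with the 4763-4807 us job averages in the final table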
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{
00:11:51.218  "tick_rate": 2200000000,
00:11:51.218  "ticks": 1410206679930,
00:11:51.218  "name": "Malloc_STAT",
00:11:51.218  "channels": [
00:11:51.218  {
00:11:51.218  "thread_id": 2,
00:11:51.218  "bytes_read": 412090368,
00:11:51.218  "num_read_ops": 100608,
00:11:51.218  "bytes_written": 0,
00:11:51.218  "num_write_ops": 0,
00:11:51.218  "bytes_unmapped": 0,
00:11:51.218  "num_unmap_ops": 0,
00:11:51.218  "bytes_copied": 0,
00:11:51.218  "num_copy_ops": 0,
00:11:51.218  "read_latency_ticks": 1060687180574,
00:11:51.218  "max_read_latency_ticks": 16634025,
00:11:51.218  "min_read_latency_ticks": 7127298,
00:11:51.218  "write_latency_ticks": 0,
00:11:51.218  "max_write_latency_ticks": 0,
00:11:51.218  "min_write_latency_ticks": 0,
00:11:51.218  "unmap_latency_ticks": 0,
00:11:51.218  "max_unmap_latency_ticks": 0,
00:11:51.218  "min_unmap_latency_ticks": 0,
00:11:51.218  "copy_latency_ticks": 0,
00:11:51.218  "max_copy_latency_ticks": 0,
00:11:51.218  "min_copy_latency_ticks": 0
00:11:51.218  },
00:11:51.218  {
00:11:51.218  "thread_id": 3,
00:11:51.218  "bytes_read": 416284672,
00:11:51.218  "num_read_ops": 101632,
00:11:51.218  "bytes_written": 0,
00:11:51.218  "num_write_ops": 0,
00:11:51.218  "bytes_unmapped": 0,
00:11:51.218  "num_unmap_ops": 0,
00:11:51.218  "bytes_copied": 0,
00:11:51.218  "num_copy_ops": 0,
00:11:51.218  "read_latency_ticks": 1063044574451,
00:11:51.218  "max_read_latency_ticks": 14128686,
00:11:51.218  "min_read_latency_ticks": 7458458,
00:11:51.218  "write_latency_ticks": 0,
00:11:51.218  "max_write_latency_ticks": 0,
00:11:51.218  "min_write_latency_ticks": 0,
00:11:51.218  "unmap_latency_ticks": 0,
00:11:51.218  "max_unmap_latency_ticks": 0,
00:11:51.218  "min_unmap_latency_ticks": 0,
00:11:51.218  "copy_latency_ticks": 0,
00:11:51.218  "max_copy_latency_ticks": 0,
00:11:51.218  "min_copy_latency_ticks": 0
00:11:51.218  }
00:11:51.218  ]
00:11:51.218  }'
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops'
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=100608
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=100608
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops'
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=101632
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=202240
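That running total is simply the two channel counters summed: 100608 reads on thread_id 2 plus 101632 on thread_id 3 gives 202240.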
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{
00:11:51.218  "tick_rate": 2200000000,
00:11:51.218  "ticks": 1410310973520,
00:11:51.218  "bdevs": [
00:11:51.218  {
00:11:51.218  "name": "Malloc_STAT",
00:11:51.218  "bytes_read": 847286784,
00:11:51.218  "num_read_ops": 206851,
00:11:51.218  "bytes_written": 0,
00:11:51.218  "num_write_ops": 0,
00:11:51.218  "bytes_unmapped": 0,
00:11:51.218  "num_unmap_ops": 0,
00:11:51.218  "bytes_copied": 0,
00:11:51.218  "num_copy_ops": 0,
00:11:51.218  "read_latency_ticks": 2176608140323,
00:11:51.218  "max_read_latency_ticks": 16634025,
00:11:51.218  "min_read_latency_ticks": 391688,
00:11:51.218  "write_latency_ticks": 0,
00:11:51.218  "max_write_latency_ticks": 0,
00:11:51.218  "min_write_latency_ticks": 0,
00:11:51.218  "unmap_latency_ticks": 0,
00:11:51.218  "max_unmap_latency_ticks": 0,
00:11:51.218  "min_unmap_latency_ticks": 0,
00:11:51.218  "copy_latency_ticks": 0,
00:11:51.218  "max_copy_latency_ticks": 0,
00:11:51.218  "min_copy_latency_ticks": 0,
00:11:51.218  "io_error": {}
00:11:51.218  }
00:11:51.218  ]
00:11:51.218  }'
00:11:51.218    05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops'
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=206851
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 202240 -lt 199427 ']'
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 202240 -gt 206851 ']'
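Those two bracket tests encode the stat suite's invariant: the per-channel sum was sampled between the two aggregate snapshots, so it must land in [199427, 206851]. 202240 does, neither branch trips, and the test moves on to deleting Malloc_STAT.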
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:51.218   05:57:11 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:51.218  
00:11:51.218                                                                                                  Latency(us)
00:11:51.218  
[2024-11-18T05:57:12.196Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:51.218  Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:11:51.218  	 Malloc_STAT         :       1.97   53107.67     207.45       0.00     0.00    4807.47    1355.40    7566.43
00:11:51.218  Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:11:51.218  	 Malloc_STAT         :       1.97   53602.81     209.39       0.00     0.00    4763.49     983.04    6434.44
00:11:51.218  
[2024-11-18T05:57:12.196Z]  ===================================================================================================================
00:11:51.218  
[2024-11-18T05:57:12.196Z]  Total                       :             106710.48     416.84       0.00     0.00    4785.38     983.04    7566.43
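As a quick cross-check of the table above, the Total row is simply the sum of the two per-core rows: 53107.67 + 53602.81 = 106710.48 IOPS, e.g.

    echo '53107.67 + 53602.81' | bc    # 106710.48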
00:11:51.218  {
00:11:51.218    "results": [
00:11:51.219      {
00:11:51.219        "job": "Malloc_STAT",
00:11:51.219        "core_mask": "0x1",
00:11:51.219        "workload": "randread",
00:11:51.219        "status": "finished",
00:11:51.219        "queue_depth": 256,
00:11:51.219        "io_size": 4096,
00:11:51.219        "runtime": 1.971542,
00:11:51.219        "iops": 53107.669022521455,
00:11:51.219        "mibps": 207.45183211922443,
00:11:51.219        "io_failed": 0,
00:11:51.219        "io_timeout": 0,
00:11:51.219        "avg_latency_us": 4807.474834407645,
00:11:51.219        "min_latency_us": 1355.4036363636365,
00:11:51.219        "max_latency_us": 7566.4290909090905
00:11:51.219      },
00:11:51.219      {
00:11:51.219        "job": "Malloc_STAT",
00:11:51.219        "core_mask": "0x2",
00:11:51.219        "workload": "randread",
00:11:51.219        "status": "finished",
00:11:51.219        "queue_depth": 256,
00:11:51.219        "io_size": 4096,
00:11:51.219        "runtime": 1.972434,
00:11:51.219        "iops": 53602.807495713416,
00:11:51.219        "mibps": 209.38596678013053,
00:11:51.219        "io_failed": 0,
00:11:51.219        "io_timeout": 0,
00:11:51.219        "avg_latency_us": 4763.491930442438,
00:11:51.219        "min_latency_us": 983.04,
00:11:51.219        "max_latency_us": 6434.443636363636
00:11:51.219      }
00:11:51.219    ],
00:11:51.219    "core_count": 2
00:11:51.219  }
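The same totals can be pulled straight out of the JSON block above with jq; here 'perf.json' is a placeholder name for that block saved to a file:

    jq '[.results[].iops] | add' perf.json                     # ~106710.48
    jq -r '.results[] | "\(.core_mask) \(.iops)"' perf.json    # per-core IOPS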
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 83031
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' -z 83031 ']'
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # kill -0 83031
00:11:51.219    05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # uname
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:11:51.219    05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83031
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:11:51.219  killing process with pid 83031
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83031'
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # kill 83031
00:11:51.219  Received shutdown signal, test time was about 2.022865 seconds
00:11:51.219  
00:11:51.219                                                                                                  Latency(us)
00:11:51.219  
[2024-11-18T05:57:12.197Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:51.219  
[2024-11-18T05:57:12.197Z]  ===================================================================================================================
00:11:51.219  
[2024-11-18T05:57:12.197Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:11:51.219   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@978 -- # wait 83031
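The killprocess walk traced above (autotest_common.sh@954-978) boils down to: verify the pid argument, confirm the process is alive and is not a sudo wrapper, then signal it and reap it. A simplified sketch of that flow:

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # already gone
        # refuse to signal a privileged wrapper, mirroring the 'sudo' test
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"    # wait works here: the pid is our child
    }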
00:11:51.477   05:57:12 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT
00:11:51.478  
00:11:51.478  real	0m2.739s
00:11:51.478  user	0m5.334s
00:11:51.478  sys	0m0.301s
00:11:51.478   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:51.478  ************************************
00:11:51.478  END TEST bdev_stat
00:11:51.478   05:57:12 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:11:51.478  ************************************
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]]
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]]
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]]
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]]
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]]
00:11:51.478   05:57:12 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]]
00:11:51.478  
00:11:51.478  real	1m50.073s
00:11:51.478  user	5m9.481s
00:11:51.478  sys	0m21.650s
00:11:51.478   05:57:12 blockdev_general -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:51.478  ************************************
00:11:51.478  END TEST blockdev_general
00:11:51.478  ************************************
00:11:51.478   05:57:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:51.478   05:57:12  -- spdk/autotest.sh@181 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:11:51.478   05:57:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:11:51.478   05:57:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:51.478   05:57:12  -- common/autotest_common.sh@10 -- # set +x
00:11:51.478  ************************************
00:11:51.478  START TEST bdevperf_config
00:11:51.478  ************************************
00:11:51.478   05:57:12 bdevperf_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:11:51.478  * Looking for test storage...
00:11:51.478  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
00:11:51.478    05:57:12 bdevperf_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:11:51.478     05:57:12 bdevperf_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:11:51.478     05:57:12 bdevperf_config -- common/autotest_common.sh@1693 -- # lcov --version
00:11:51.737    05:57:12 bdevperf_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@336 -- # IFS=.-:
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@336 -- # read -ra ver1
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@337 -- # IFS=.-:
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@337 -- # read -ra ver2
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@338 -- # local 'op=<'
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@340 -- # ver1_l=2
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@341 -- # ver2_l=1
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@344 -- # case "$op" in
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@345 -- # : 1
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@365 -- # decimal 1
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@353 -- # local d=1
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@355 -- # echo 1
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@365 -- # ver1[v]=1
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@366 -- # decimal 2
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@353 -- # local d=2
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:11:51.737     05:57:12 bdevperf_config -- scripts/common.sh@355 -- # echo 2
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@366 -- # ver2[v]=2
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:11:51.737    05:57:12 bdevperf_config -- scripts/common.sh@368 -- # return 0
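The scripts/common.sh trace above is the stock "is lcov older than 2.x" probe: split both version strings on . - :, then compare component by component. A minimal, numeric-components-only re-implementation of the path exercised here:

    lt() {    # lt 1.15 2  ->  exit 0 iff $1 < $2
        local IFS=.-: v
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x'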
00:11:51.737    05:57:12 bdevperf_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:11:51.737    05:57:12 bdevperf_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:11:51.737  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.737  		--rc genhtml_branch_coverage=1
00:11:51.737  		--rc genhtml_function_coverage=1
00:11:51.737  		--rc genhtml_legend=1
00:11:51.737  		--rc geninfo_all_blocks=1
00:11:51.737  		--rc geninfo_unexecuted_blocks=1
00:11:51.737  		
00:11:51.737  		'
00:11:51.737    05:57:12 bdevperf_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:11:51.737  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.737  		--rc genhtml_branch_coverage=1
00:11:51.737  		--rc genhtml_function_coverage=1
00:11:51.737  		--rc genhtml_legend=1
00:11:51.737  		--rc geninfo_all_blocks=1
00:11:51.737  		--rc geninfo_unexecuted_blocks=1
00:11:51.737  		
00:11:51.737  		'
00:11:51.737    05:57:12 bdevperf_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:11:51.737  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.737  		--rc genhtml_branch_coverage=1
00:11:51.737  		--rc genhtml_function_coverage=1
00:11:51.737  		--rc genhtml_legend=1
00:11:51.737  		--rc geninfo_all_blocks=1
00:11:51.737  		--rc geninfo_unexecuted_blocks=1
00:11:51.737  		
00:11:51.737  		'
00:11:51.737    05:57:12 bdevperf_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:51.737  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:11:51.737  		--rc genhtml_branch_coverage=1
00:11:51.737  		--rc genhtml_function_coverage=1
00:11:51.737  		--rc genhtml_legend=1
00:11:51.737  		--rc geninfo_all_blocks=1
00:11:51.737  		--rc geninfo_unexecuted_blocks=1
00:11:51.737  		
00:11:51.737  		'
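Those exported LCOV_OPTS/LCOV values pre-bake the lcov 1.x rc switches for branch and function coverage. Illustrative downstream use, assuming standard lcov/genhtml flags (the paths are placeholders, not from this run):

    lcov $LCOV_OPTS --capture --directory build --output-file cov.info
    genhtml --branch-coverage --function-coverage -o cov_html cov.info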
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh
00:11:51.737    05:57:12 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:11:51.737  
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:11:51.737  
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:11:51.737  
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:11:51.737   05:57:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:11:51.737  
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:11:51.738  
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:51.738   05:57:12 bdevperf_config -- bdevperf/common.sh@20 -- # cat
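Each create_job call above appends one INI section to test.conf. Only the section headers and the [global] rw/filename locals are visible in this trace, so the reconstruction below is approximate:

    [global]
    rw=read
    filename=Malloc0

    [job0]

    [job1]

    [job2]

    [job3]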
00:11:51.738    05:57:12 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:11:54.274   05:57:15 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-18 05:57:12.570108] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:54.274  [2024-11-18 05:57:12.570270] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83159 ]
00:11:54.274  Using job config with 4 jobs
00:11:54.274  [2024-11-18 05:57:12.714026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:54.274  [2024-11-18 05:57:12.734812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:54.274  cpumask for '\''job0'\'' is too big
00:11:54.274  cpumask for '\''job1'\'' is too big
00:11:54.274  cpumask for '\''job2'\'' is too big
00:11:54.274  cpumask for '\''job3'\'' is too big
00:11:54.274  Running I/O for 2 seconds...
00:11:54.274     104448.00 IOPS,   102.00 MiB/s
[2024-11-18T05:57:15.252Z]    103936.00 IOPS,   101.50 MiB/s
00:11:54.274                                                                                                  Latency(us)
00:11:54.274  
[2024-11-18T05:57:15.252Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:54.274  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:54.274  	 Malloc0             :       2.02   25979.36      25.37       0.00     0.00    9843.91    2144.81   18230.92
00:11:54.274  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:54.274  	 Malloc0             :       2.02   25960.15      25.35       0.00     0.00    9826.22    2055.45   16324.42
00:11:54.274  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:54.274  	 Malloc0             :       2.02   25940.62      25.33       0.00     0.00    9809.00    1995.87   14477.50
00:11:54.274  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:54.274  	 Malloc0             :       2.02   25921.63      25.31       0.00     0.00    9793.20    2055.45   15490.33
00:11:54.274  
[2024-11-18T05:57:15.252Z]  ===================================================================================================================
00:11:54.274  
[2024-11-18T05:57:15.252Z]  Total                       :             103801.76     101.37       0.00     0.00    9818.08    1995.87   18230.92'
00:11:54.274    05:57:15 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs "$bdevperf_output"
00:11:54.274    05:57:15 bdevperf_config -- bdevperf/common.sh@32 -- # echo "$bdevperf_output"
00:11:54.275    05:57:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:11:54.275    05:57:15 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:11:54.275   05:57:15 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]]
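The @23 assertion confirms the banner matched the expected job count. The get_num_jobs helper traced at common.sh@32 is just a two-stage grep over the captured output:

    get_num_jobs() {
        echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
    }
    [[ "$(get_num_jobs "$bdevperf_output")" == "4" ]]    # passes for this run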
00:11:54.275    05:57:15 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:11:54.275  [2024-11-18 05:57:15.125097] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:54.275  [2024-11-18 05:57:15.125303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83189 ]
00:11:54.533  [2024-11-18 05:57:15.275714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:54.533  [2024-11-18 05:57:15.300603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:54.533  cpumask for 'job0' is too big
00:11:54.534  cpumask for 'job1' is too big
00:11:54.534  cpumask for 'job2' is too big
00:11:54.534  cpumask for 'job3' is too big
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs
00:11:57.068  Running I/O for 2 seconds...
00:11:57.068      99328.00 IOPS,    97.00 MiB/s
[2024-11-18T05:57:18.046Z]    101376.00 IOPS,    99.00 MiB/s
00:11:57.068                                                                                                  Latency(us)
00:11:57.068  
[2024-11-18T05:57:18.046Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:57.068  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:57.068  	 Malloc0             :       2.01   25312.96      24.72       0.00     0.00   10104.36    1779.90   20137.43
00:11:57.068  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:57.068  	 Malloc0             :       2.03   25272.69      24.68       0.00     0.00   10097.79    1683.08   18945.86
00:11:57.068  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:57.068  	 Malloc0             :       2.03   25253.93      24.66       0.00     0.00   10083.69    1630.95   21567.30
00:11:57.068  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:11:57.068  	 Malloc0             :       2.03   25235.25      24.64       0.00     0.00   10071.35    1630.95   23235.49
00:11:57.068  
[2024-11-18T05:57:18.046Z]  ===================================================================================================================
00:11:57.068  
[2024-11-18T05:57:18.046Z]  Total                       :             101074.83      98.71       0.00     0.00   10089.28    1630.95   23235.49'
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:57.068  
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:57.068  
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:11:57.068  
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:57.068   05:57:17 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:57.068    05:57:17 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:11:59.600   05:57:20 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-18 05:57:17.720247] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:11:59.600  [2024-11-18 05:57:17.720445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83230 ]
00:11:59.600  Using job config with 3 jobs
00:11:59.600  [2024-11-18 05:57:17.873183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:59.600  [2024-11-18 05:57:17.893074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:59.600  cpumask for '\''job0'\'' is too big
00:11:59.600  cpumask for '\''job1'\'' is too big
00:11:59.600  cpumask for '\''job2'\'' is too big
00:11:59.600  Running I/O for 2 seconds...
00:11:59.600      99072.00 IOPS,    96.75 MiB/s
[2024-11-18T05:57:20.578Z]     99072.00 IOPS,    96.75 MiB/s
00:11:59.600                                                                                                  Latency(us)
00:11:59.600  
[2024-11-18T05:57:20.578Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:59.600  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:11:59.600  	 Malloc0             :       2.01   32932.03      32.16       0.00     0.00    7764.65    1966.08   12034.79
00:11:59.600  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:11:59.601  	 Malloc0             :       2.02   32872.88      32.10       0.00     0.00    7760.39    1787.35   12034.79
00:11:59.601  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:11:59.601  	 Malloc0             :       2.02   32892.75      32.12       0.00     0.00    7739.59    1005.38   13345.51
00:11:59.601  
[2024-11-18T05:57:20.579Z]  ===================================================================================================================
00:11:59.601  
[2024-11-18T05:57:20.579Z]  Total                       :              98697.66      96.38       0.00     0.00    7754.86    1005.38   13345.51'
00:11:59.601    05:57:20 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs "$bdevperf_output"
00:11:59.601    05:57:20 bdevperf_config -- bdevperf/common.sh@32 -- # echo "$bdevperf_output"
00:11:59.601    05:57:20 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:11:59.601    05:57:20 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:11:59.601  
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:11:59.601  
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:11:59.601  
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:11:59.601  
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:11:59.601  
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:11:59.601   05:57:20 bdevperf_config -- bdevperf/common.sh@20 -- # cat
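This last config exercises a mixed read/write workload across two bdevs. From the locals in the trace, the [global] section plausibly ends up as below; the 70% read split reported in the output comes from settings not visible in this trace:

    [global]
    rw=rw
    filename=Malloc0:Malloc1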
00:11:59.601    05:57:20 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:02.136   05:57:22 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-18 05:57:20.305314] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:02.136  [2024-11-18 05:57:20.305545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83267 ]
00:12:02.136  Using job config with 4 jobs
00:12:02.136  [2024-11-18 05:57:20.460675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:02.136  [2024-11-18 05:57:20.481883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:02.136  cpumask for '\''job0'\'' is too big
00:12:02.136  cpumask for '\''job1'\'' is too big
00:12:02.136  cpumask for '\''job2'\'' is too big
00:12:02.136  cpumask for '\''job3'\'' is too big
00:12:02.136  Running I/O for 2 seconds...
00:12:02.136     106496.00 IOPS,   104.00 MiB/s
[2024-11-18T05:57:23.114Z]    102912.00 IOPS,   100.50 MiB/s
00:12:02.136                                                                                                  Latency(us)
00:12:02.136  
[2024-11-18T05:57:23.114Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:02.136  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc0             :       2.04   12671.12      12.37       0.00     0.00   20188.12    3842.79   33125.47
00:12:02.136  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc1             :       2.04   12661.25      12.36       0.00     0.00   20184.69    4825.83   32887.16
00:12:02.136  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc0             :       2.05   12641.25      12.34       0.00     0.00   20143.85    3872.58   28478.37
00:12:02.136  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc1             :       2.05   12631.58      12.34       0.00     0.00   20136.48    5064.15   28001.75
00:12:02.136  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc0             :       2.05   12621.67      12.33       0.00     0.00   20082.04    4021.53   25618.62
00:12:02.136  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc1             :       2.05   12611.99      12.32       0.00     0.00   20076.29    4736.47   25856.93
00:12:02.136  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc0             :       2.05   12602.36      12.31       0.00     0.00   20020.87    3872.58   26095.24
00:12:02.136  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:02.136  	 Malloc1             :       2.05   12592.42      12.30       0.00     0.00   20015.08    4349.21   26095.24
00:12:02.136  
[2024-11-18T05:57:23.114Z]  ===================================================================================================================
00:12:02.136  
[2024-11-18T05:57:23.114Z]  Total                       :             101033.65      98.67       0.00     0.00   20105.93    3842.79   33125.47'
00:12:02.137    05:57:22 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs "$bdevperf_output"
00:12:02.137    05:57:22 bdevperf_config -- bdevperf/common.sh@32 -- # echo "$bdevperf_output"
00:12:02.137    05:57:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:12:02.137    05:57:22 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:12:02.137   05:57:22 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:12:02.137   05:57:22 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup
00:12:02.137   05:57:22 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:02.137   05:57:22 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:12:02.137  ************************************
00:12:02.137  END TEST bdevperf_config
00:12:02.137  ************************************
00:12:02.137  
00:12:02.137  real	0m10.552s
00:12:02.137  user	0m9.312s
00:12:02.137  sys	0m0.798s
00:12:02.137   05:57:22 bdevperf_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:02.137   05:57:22 bdevperf_config -- common/autotest_common.sh@10 -- # set +x
00:12:02.137    05:57:22  -- spdk/autotest.sh@182 -- # uname -s
00:12:02.137   05:57:22  -- spdk/autotest.sh@182 -- # [[ Linux == Linux ]]
00:12:02.137   05:57:22  -- spdk/autotest.sh@183 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:12:02.137   05:57:22  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:02.137   05:57:22  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:02.137   05:57:22  -- common/autotest_common.sh@10 -- # set +x
00:12:02.137  ************************************
00:12:02.137  START TEST reactor_set_interrupt
00:12:02.137  ************************************
00:12:02.137   05:57:22 reactor_set_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:12:02.137  * Looking for test storage...
00:12:02.137  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.137    05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:02.137     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lcov --version
00:12:02.137     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:02.399    05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:02.399     05:57:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:02.399    05:57:23 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0
00:12:02.399    05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:02.399    05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:02.399  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.399  		--rc genhtml_branch_coverage=1
00:12:02.399  		--rc genhtml_function_coverage=1
00:12:02.399  		--rc genhtml_legend=1
00:12:02.399  		--rc geninfo_all_blocks=1
00:12:02.399  		--rc geninfo_unexecuted_blocks=1
00:12:02.399  		
00:12:02.399  		'
00:12:02.399    05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:02.399  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.399  		--rc genhtml_branch_coverage=1
00:12:02.399  		--rc genhtml_function_coverage=1
00:12:02.399  		--rc genhtml_legend=1
00:12:02.399  		--rc geninfo_all_blocks=1
00:12:02.399  		--rc geninfo_unexecuted_blocks=1
00:12:02.399  		
00:12:02.399  		'
00:12:02.399    05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:02.399  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.399  		--rc genhtml_branch_coverage=1
00:12:02.399  		--rc genhtml_function_coverage=1
00:12:02.399  		--rc genhtml_legend=1
00:12:02.399  		--rc geninfo_all_blocks=1
00:12:02.399  		--rc geninfo_unexecuted_blocks=1
00:12:02.399  		
00:12:02.399  		'
00:12:02.399    05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:02.399  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.399  		--rc genhtml_branch_coverage=1
00:12:02.399  		--rc genhtml_function_coverage=1
00:12:02.399  		--rc genhtml_legend=1
00:12:02.399  		--rc geninfo_all_blocks=1
00:12:02.399  		--rc geninfo_unexecuted_blocks=1
00:12:02.399  		
00:12:02.399  		'
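[editor's note] The four exports above freeze one multi-line option string into both LCOV_OPTS and an LCOV command wrapper, enabling branch and function coverage across the lcov/genhtml/geninfo family through --rc overrides. A hedged sketch of how the wrapper would typically be consumed later in a coverage run (directory and file names are placeholders, not from this log):

  # $LCOV already carries the --rc flags from the export above.
  $LCOV --capture --directory build/ --output-file coverage.info
  $LCOV --list coverage.info   # per-file coverage summary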
00:12:02.399   05:57:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:12:02.399      05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:12:02.399     05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.399    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.399     05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:12:02.399    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
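[editor's note] interrupt_common.sh@5-6 is the standard bash self-location idiom: resolve the absolute, symlink-free directory of the running script, then walk up to the repository root. The same pattern as a standalone sketch (hypothetical script, not the interrupt_common.sh source):

  #!/usr/bin/env bash
  # Absolute directory of this script, with symlinks resolved:
  testdir=$(readlink -f "$(dirname "$0")")
  # Repository root, two levels up (test/interrupt -> repo root):
  rootdir=$(readlink -f "$testdir/../..")
  echo "testdir=$testdir rootdir=$rootdir"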
00:12:02.399    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:12:02.399     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_CET=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:12:02.399      05:57:23 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_FC=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:12:02.400      05:57:23 reactor_set_interrupt -- common/build_config.sh@90 -- # CONFIG_URING=n
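[editor's note] build_config.sh, dumped above, is a flat list of CONFIG_* shell assignments mirroring the configure result; once sourced, test scripts can branch on individual y/n flags. A small illustrative example of consuming it (the coverage check is hypothetical, not taken from this log):

  source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
  if [[ $CONFIG_COVERAGE == y ]]; then
      echo "coverage build: lcov post-processing enabled"
  fi
  [[ $CONFIG_FUZZER == y ]] || echo "fuzzer targets not built"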
00:12:02.400     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:02.400        05:57:23 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:02.400       05:57:23 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:12:02.400  #define SPDK_CONFIG_H
00:12:02.400  #define SPDK_CONFIG_AIO_FSDEV 1
00:12:02.400  #define SPDK_CONFIG_APPS 1
00:12:02.400  #define SPDK_CONFIG_ARCH native
00:12:02.400  #define SPDK_CONFIG_ASAN 1
00:12:02.400  #undef SPDK_CONFIG_AVAHI
00:12:02.400  #undef SPDK_CONFIG_CET
00:12:02.400  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:12:02.400  #define SPDK_CONFIG_COVERAGE 1
00:12:02.400  #define SPDK_CONFIG_CROSS_PREFIX 
00:12:02.400  #undef SPDK_CONFIG_CRYPTO
00:12:02.400  #undef SPDK_CONFIG_CRYPTO_MLX5
00:12:02.400  #undef SPDK_CONFIG_CUSTOMOCF
00:12:02.400  #undef SPDK_CONFIG_DAOS
00:12:02.400  #define SPDK_CONFIG_DAOS_DIR 
00:12:02.400  #define SPDK_CONFIG_DEBUG 1
00:12:02.400  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:12:02.400  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:12:02.400  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:12:02.400  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:12:02.400  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:12:02.400  #undef SPDK_CONFIG_DPDK_UADK
00:12:02.400  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:02.400  #define SPDK_CONFIG_EXAMPLES 1
00:12:02.400  #undef SPDK_CONFIG_FC
00:12:02.400  #define SPDK_CONFIG_FC_PATH 
00:12:02.400  #define SPDK_CONFIG_FIO_PLUGIN 1
00:12:02.400  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:12:02.400  #define SPDK_CONFIG_FSDEV 1
00:12:02.400  #undef SPDK_CONFIG_FUSE
00:12:02.400  #undef SPDK_CONFIG_FUZZER
00:12:02.400  #define SPDK_CONFIG_FUZZER_LIB 
00:12:02.400  #undef SPDK_CONFIG_GOLANG
00:12:02.400  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:12:02.400  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:12:02.400  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:12:02.400  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:12:02.400  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:12:02.400  #undef SPDK_CONFIG_HAVE_LIBBSD
00:12:02.400  #undef SPDK_CONFIG_HAVE_LZ4
00:12:02.400  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:12:02.400  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:12:02.400  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:12:02.400  #define SPDK_CONFIG_IDXD 1
00:12:02.400  #define SPDK_CONFIG_IDXD_KERNEL 1
00:12:02.400  #undef SPDK_CONFIG_IPSEC_MB
00:12:02.400  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:12:02.400  #define SPDK_CONFIG_ISAL 1
00:12:02.400  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:12:02.400  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:12:02.400  #define SPDK_CONFIG_LIBDIR 
00:12:02.400  #undef SPDK_CONFIG_LTO
00:12:02.400  #define SPDK_CONFIG_MAX_LCORES 128
00:12:02.400  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:12:02.400  #define SPDK_CONFIG_NVME_CUSE 1
00:12:02.400  #undef SPDK_CONFIG_OCF
00:12:02.400  #define SPDK_CONFIG_OCF_PATH 
00:12:02.400  #define SPDK_CONFIG_OPENSSL_PATH 
00:12:02.400  #undef SPDK_CONFIG_PGO_CAPTURE
00:12:02.400  #define SPDK_CONFIG_PGO_DIR 
00:12:02.400  #undef SPDK_CONFIG_PGO_USE
00:12:02.400  #define SPDK_CONFIG_PREFIX /usr/local
00:12:02.400  #undef SPDK_CONFIG_RAID5F
00:12:02.400  #undef SPDK_CONFIG_RBD
00:12:02.400  #define SPDK_CONFIG_RDMA 1
00:12:02.400  #define SPDK_CONFIG_RDMA_PROV verbs
00:12:02.400  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:12:02.400  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:12:02.400  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:12:02.400  #undef SPDK_CONFIG_SHARED
00:12:02.400  #undef SPDK_CONFIG_SMA
00:12:02.400  #define SPDK_CONFIG_TESTS 1
00:12:02.400  #undef SPDK_CONFIG_TSAN
00:12:02.400  #define SPDK_CONFIG_UBLK 1
00:12:02.400  #define SPDK_CONFIG_UBSAN 1
00:12:02.400  #define SPDK_CONFIG_UNIT_TESTS 1
00:12:02.400  #undef SPDK_CONFIG_URING
00:12:02.400  #define SPDK_CONFIG_URING_PATH 
00:12:02.400  #undef SPDK_CONFIG_URING_ZNS
00:12:02.400  #undef SPDK_CONFIG_USDT
00:12:02.400  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:12:02.400  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:12:02.400  #undef SPDK_CONFIG_VFIO_USER
00:12:02.400  #define SPDK_CONFIG_VFIO_USER_DIR 
00:12:02.400  #define SPDK_CONFIG_VHOST 1
00:12:02.400  #define SPDK_CONFIG_VIRTIO 1
00:12:02.400  #undef SPDK_CONFIG_VTUNE
00:12:02.400  #define SPDK_CONFIG_VTUNE_DIR 
00:12:02.400  #define SPDK_CONFIG_WERROR 1
00:12:02.400  #define SPDK_CONFIG_WPDK_DIR 
00:12:02.400  #undef SPDK_CONFIG_XNVME
00:12:02.400  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:12:02.400      05:57:23 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
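[editor's note] The long [[ ... ]] spanning the config.h dump above is bash comparing the entire file contents against the glob *'#define SPDK_CONFIG_DEBUG'* — xtrace prints the pattern with every character backslash-escaped. applications.sh uses the match to decide whether debug-only app behavior applies. The idiom as a hedged standalone sketch:

  config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
  # $(<file) expands to the file's contents; '==' with a glob is a substring test.
  if [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
      echo "debug build detected"
  fi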
00:12:02.400     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:02.400      05:57:23 reactor_set_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:12:02.400      05:57:23 reactor_set_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:02.400      05:57:23 reactor_set_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:02.401      05:57:23 reactor_set_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:02.401       05:57:23 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:02.401       05:57:23 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:02.401       05:57:23 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:02.401       05:57:23 reactor_set_interrupt -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:02.401       05:57:23 reactor_set_interrupt -- paths/export.sh@6 -- # export PATH
00:12:02.401       05:57:23 reactor_set_interrupt -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
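[editor's note] paths/export.sh prepends its tool directories on every source, so after repeated sourcing the PATH above carries the same /opt/go, /opt/protoc and /opt/golangci entries several times — harmless but noisy. A hedged sketch of one way to deduplicate while preserving first-seen order (not something the script itself does):

  dedup_path() {
      local entry out=
      local -A seen=()
      local IFS=:
      for entry in $1; do
          [[ -n $entry && -z ${seen[$entry]} ]] || continue
          seen[$entry]=1
          out+=${out:+:}$entry
      done
      printf '%s\n' "$out"
  }
  PATH=$(dedup_path "$PATH")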
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:02.401        05:57:23 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:02.401       05:57:23 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:02.401       05:57:23 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:12:02.401       05:57:23 reactor_set_interrupt -- pm/common@68 -- # uname -s
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]=
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E'
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]]
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:12:02.401      05:57:23 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
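[editor's note] pm/common registers the power/resource monitors and, via the MONITOR_RESOURCES_SUDO map plus the two-element SUDO array, encodes which collectors must run under sudo (only collect-bmc-pm here). A hedged sketch of how that lookup composes a command prefix — the monitor names are from the log, the invocation itself is illustrative:

  SUDO[0]="" ; SUDO[1]="sudo -E"
  declare -A MONITOR_RESOURCES_SUDO=(
      [collect-bmc-pm]=1 [collect-cpu-load]=0
      [collect-cpu-temp]=0 [collect-vmstat]=0
  )
  for monitor in collect-cpu-load collect-vmstat; do
      # Index the SUDO array with the 0/1 flag to get the right prefix.
      prefix=${SUDO[${MONITOR_RESOURCES_SUDO[$monitor]}]}
      echo "would run: ${prefix:+$prefix }$monitor ..."
  done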
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@70 -- # :
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 1
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:12:02.401     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : v22.11.4
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : true
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@154 -- # :
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@169 -- # :
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@175 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@177 -- # : 0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
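[editor's note] The long run of paired lines above (": 1" or ": 0" followed by "export SPDK_TEST_...") is the xtrace of bash's no-op default-assignment idiom: ':' evaluates its arguments, so the expansion assigns a default only when the variable is unset, and the export then publishes whichever value won (CI-provided or default). A minimal sketch of the pattern, with a placeholder flag:

  # ':' is a no-op that still expands its arguments, so this assigns the
  # default only when the variable is unset; CI can pre-seed it in the env.
  : "${SPDK_TEST_NVME:=1}"
  export SPDK_TEST_NVME
  echo "SPDK_TEST_NVME=$SPDK_TEST_NVME"   # 1 unless the environment preset it

A flag with no default traces as a bare ":" — see the @70, @154 and @169 lines above.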
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@206 -- # cat
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
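[editor's note] Lines @204-@244 assemble a LeakSanitizer suppression file: the known-benign libfuse3 leak is whitelisted and LSAN_OPTIONS points the runtime at the file. The same mechanism as a condensed, hedged sketch:

  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  # One "leak:<pattern>" line per known-benign leak site.
  echo "leak:libfuse3.so" > "$supp"
  export LSAN_OPTIONS=suppressions=$supp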
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export QEMU_BIN=
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@259 -- # QEMU_BIN=
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@269 -- # _LCOV=
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@275 -- # lcov_opt=
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@279 -- # export valgrind=
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@279 -- # valgrind=
00:12:02.402      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@285 -- # uname -s
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:12:02.402     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@289 -- # MAKE=make
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@309 -- # TEST_MODE=
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@331 -- # [[ -z 83336 ]]
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@331 -- # kill -0 83336
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@344 -- # local mount target_dir
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:12:02.403      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.xH2A9t
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.xH2A9t/tests/interrupt /tmp/spdk.xH2A9t
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@340 -- # df -T
00:12:02.403      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1249308672
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4714496
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=8908382208
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=19681529856
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=10756370432
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=6266687488
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=6270115840
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda16
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=777306112
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=923156480
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=81207296
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=103000064
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=6395904
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1254010880
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=97189064704
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=2513715200
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:12:02.403  * Looking for test storage...
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:02.403      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.403      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@385 -- # mount=/
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@387 -- # target_space=8908382208
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]]
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]]
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@394 -- # new_size=12970962944
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.403  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:02.403     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@402 -- # return 0
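[editor's note] set_test_storage, traced above, parses df -T into the mounts/fss/avails/sizes/uses arrays, finds the filesystem backing the test directory, and accepts it once available space covers the requested 2 GiB plus slack; here / (ext4, ~8.9 GB free) wins. A condensed, hedged sketch of the selection logic — the trace parses df -T manually, this uses GNU df's --output for brevity:

  requested_size=$((2147483648 + 67108864))   # 2 GiB + overhead, as in the trace
  target_dir=/home/vagrant/spdk_repo/spdk/test/interrupt
  mount_point=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
  target_space=$(df --output=avail -B1 "$mount_point" | tail -1)
  if (( target_space >= requested_size )); then
      export SPDK_TEST_STORAGE=$target_dir
      printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"
  fi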
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set -o errtrace
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # true
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # xtrace_fd
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x
00:12:02.404     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:02.404      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lcov --version
00:12:02.404      05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:02.663     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:02.663      05:57:23 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:02.663     05:57:23 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0
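The lt 1.15 2 call above is scripts/common.sh doing a component-wise version compare: each version is split on ./-/: into an array, missing components fall back to a default (the `: 1` step), and the loop returns as soon as one component differs. A hedged re-sketch of the same idea (version_lt is an illustrative name, not the repo helper):

    version_lt() {
        local -a a b
        IFS=.-: read -ra a <<< "$1"
        IFS=.-: read -ra b <<< "$2"
        local i len=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < len; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}
            (( x < y )) && return 0    # first smaller component decides
            (( x > y )) && return 1
        done
        return 1                       # equal is not less-than
    }
    version_lt 1.15 2 && echo 'lcov < 2: keep the branch-coverage rc flags'

Here 1.15 is below 2 because the first components already differ (1 < 2), which is why the trace short-circuits at ver1[v] < ver2[v] and goes on to export the lcov_branch_coverage options.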
00:12:02.663     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:02.663     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:02.663  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.663  		--rc genhtml_branch_coverage=1
00:12:02.663  		--rc genhtml_function_coverage=1
00:12:02.663  		--rc genhtml_legend=1
00:12:02.663  		--rc geninfo_all_blocks=1
00:12:02.663  		--rc geninfo_unexecuted_blocks=1
00:12:02.663  		
00:12:02.663  		'
00:12:02.663     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:02.663  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.663  		--rc genhtml_branch_coverage=1
00:12:02.663  		--rc genhtml_function_coverage=1
00:12:02.663  		--rc genhtml_legend=1
00:12:02.663  		--rc geninfo_all_blocks=1
00:12:02.663  		--rc geninfo_unexecuted_blocks=1
00:12:02.663  		
00:12:02.663  		'
00:12:02.663     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:02.663  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.663  		--rc genhtml_branch_coverage=1
00:12:02.663  		--rc genhtml_function_coverage=1
00:12:02.663  		--rc genhtml_legend=1
00:12:02.663  		--rc geninfo_all_blocks=1
00:12:02.663  		--rc geninfo_unexecuted_blocks=1
00:12:02.663  		
00:12:02.663  		'
00:12:02.663     05:57:23 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:02.663  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:02.663  		--rc genhtml_branch_coverage=1
00:12:02.663  		--rc genhtml_function_coverage=1
00:12:02.663  		--rc genhtml_legend=1
00:12:02.663  		--rc geninfo_all_blocks=1
00:12:02.663  		--rc geninfo_unexecuted_blocks=1
00:12:02.663  		
00:12:02.663  		'
00:12:02.663    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh
00:12:02.663    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:02.663    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1
00:12:02.663    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2
00:12:02.663    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4
00:12:02.663    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07
00:12:02.663    05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock
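interrupt_common.sh pins one reactor per hex cpumask bit: r0_mask=0x1 is core 0, r1_mask=0x2 core 1, r2_mask=0x4 core 2, and cpu_server_mask=0x07 runs the target across cores 0-2, answering RPC on /var/tmp/spdk.sock. A one-liner to expand such a mask, for reference (not part of the test):

    # bit N of the cpumask selects core N; 0x07 -> cores 0 1 2
    for (( core = 0; core < 8; core++ )); do
        (( (0x07 >> core) & 1 )) && printf '%d ' "$core"
    done; echo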
00:12:02.663   05:57:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:02.663   05:57:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:02.663   05:57:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt
00:12:02.663   05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:02.663   05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:12:02.663   05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=83398
00:12:02.663   05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:12:02.664   05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:12:02.664   05:57:23 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 83398 /var/tmp/spdk.sock
00:12:02.664   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 83398 ']'
00:12:02.664   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:02.664   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:02.664  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:02.664   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:02.664   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:02.664   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:12:02.664  [2024-11-18 05:57:23.476519] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:02.664  [2024-11-18 05:57:23.476713] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83398 ]
00:12:02.664  [2024-11-18 05:57:23.637323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:02.923  [2024-11-18 05:57:23.665605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:02.923  [2024-11-18 05:57:23.665687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:02.923  [2024-11-18 05:57:23.665793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:02.923  [2024-11-18 05:57:23.717178] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:02.923   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:02.923   05:57:23 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0
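waitforlisten above spins until the freshly forked interrupt_tgt (pid 83398) is both alive and answering on the UNIX socket, bounded by max_retries=100. A sketch of that polling pattern, assuming the repo root as cwd (wait_for_rpc_socket is an illustrative name; rpc_get_methods is a stock SPDK RPC):

    wait_for_rpc_socket() {
        local pid=$1 sock=$2 retries=${3:-100}
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target died early
            [[ -S $sock ]] &&
                ./scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1 &&
                return 0                              # socket up and answering
            sleep 0.1
        done
        return 1
    }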
00:12:02.923   05:57:23 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem
00:12:02.923   05:57:23 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:03.182  Malloc0
00:12:03.182  Malloc1
00:12:03.182  Malloc2
00:12:03.182   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio
00:12:03.182    05:57:24 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s
00:12:03.182   05:57:24 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:12:03.182   05:57:24 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:12:03.182  5000+0 records in
00:12:03.182  5000+0 records out
00:12:03.182  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0228246 s, 449 MB/s
00:12:03.182   05:57:24 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:12:03.440  AIO0
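setup_bdev_aio is two steps: dd a 10 MB zero-filled backing file, then register it over RPC as an AIO bdev with a 2048-byte block size. The same steps standalone, against a running target on the default socket (the /tmp path below is illustrative):

    dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000     # 10 MB backing file
    scripts/rpc.py bdev_aio_create /tmp/aiofile AIO0 2048  # block size 2048
    scripts/rpc.py bdev_get_bdevs -b AIO0                  # confirm it exists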
00:12:03.440   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 83398
00:12:03.440   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 83398 without_thd
00:12:03.440   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=83398
00:12:03.440   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd
00:12:03.440   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:12:03.440    05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:12:03.440    05:57:24 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1
00:12:03.440    05:57:24 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:03.440    05:57:24 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1
00:12:03.440    05:57:24 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:03.440     05:57:24 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:03.440     05:57:24 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:03.699    05:57:24 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1
00:12:03.699   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:12:03.699    05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:12:03.699    05:57:24 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4
00:12:03.699    05:57:24 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:03.699    05:57:24 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4
00:12:03.699    05:57:24 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:03.699     05:57:24 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:03.699     05:57:24 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:03.958    05:57:24 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo ''
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:12:03.958  spdk_thread ids are 1 on reactor0.
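reactor_get_thread_ids resolves which spdk_thread ids sit on a reactor: it strips the 0x prefix from the cpumask (thread_get_stats reports masks without it) and filters the JSON with jq. Reactor 0 yields thread id 1 (app_thread); reactor 2 yields nothing, hence the empty echo above it. The query, stated standalone:

    mask=1   # reactor 0's cpumask with the 0x prefix stripped
    scripts/rpc.py thread_get_stats \
        | jq --arg reactor_cpumask "$mask" \
             '.threads[] | select(.cpumask == $reactor_cpumask) | .id'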
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 83398 0
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83398 0 idle
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83398
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:03.958   05:57:24 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:03.958    05:57:24 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83398 -w 256
00:12:03.958    05:57:24 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83398 root      20   0   20.1t  66816  30080 S   0.0   0.5   0:00.24 reactor_0'
00:12:04.216    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83398 root 20 0 20.1t 66816 30080 S 0.0 0.5 0:00.24 reactor_0
00:12:04.216    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:04.216    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
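reactor_is_busy_or_idle takes one batch sample of top in thread mode, greps the reactor_<idx> row, and reads the %CPU column (field 9 after trimming leading whitespace); the integer-truncated rate is then held against the 65% busy / 30% idle thresholds. Condensed, under the same assumptions as the trace (one sample, default top field layout; reactor_cpu_rate is an illustrative helper, not the repo function):

    reactor_cpu_rate() {
        local pid=$1 idx=$2
        top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx" |
            sed -e 's/^\s*//' | awk '{print $9}'
    }
    rate=$(reactor_cpu_rate 83398 0)
    rate=${rate%.*}; rate=${rate:-0}     # 99.9 -> 99, 0.0 -> 0, empty -> 0
    (( rate > 30 )) && echo not-idle || echo idle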
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 83398 1
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83398 1 idle
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83398
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:04.216   05:57:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:04.216    05:57:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83398 -w 256
00:12:04.216    05:57:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83407 root      20   0   20.1t  66816  30080 S   0.0   0.5   0:00.00 reactor_1'
00:12:04.475    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83407 root 20 0 20.1t 66816 30080 S 0.0 0.5 0:00.00 reactor_1
00:12:04.475    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:04.475    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 83398 2
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83398 2 idle
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83398
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:04.475   05:57:25 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:04.475    05:57:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:04.475    05:57:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83398 -w 256
00:12:04.733   05:57:25 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83408 root      20   0   20.1t  66816  30080 S   0.0   0.5   0:00.00 reactor_2'
00:12:04.733    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83408 root 20 0 20.1t 66816 30080 S 0.0 0.5 0:00.00 reactor_2
00:12:04.733    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:04.733    05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']'
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}"
00:12:04.734   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2
00:12:05.038  [2024-11-18 05:57:25.783407] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:05.038   05:57:25 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:12:05.324  [2024-11-18 05:57:26.051093] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:12:05.324  [2024-11-18 05:57:26.051994] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:12:05.324  [2024-11-18 05:57:26.278958] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:12:05.324  [2024-11-18 05:57:26.279729] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
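reactor_set_interrupt_mode is the interrupt_tgt example's own RPC, loaded through rpc.py's plugin mechanism; -d drops a reactor out of interrupt mode into polling, which is exactly why reactors 0 and 2 read ~100% CPU in the busy checks that follow. Both directions, as the test issues them:

    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d  # poll mode
    scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2     # back to interrupt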
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 83398 0
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 83398 0 busy
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83398
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:05.324   05:57:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:05.582    05:57:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83398 -w 256
00:12:05.582    05:57:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83398 root      20   0   20.1t  72576  30080 R  99.9   0.6   0:00.71 reactor_0'
00:12:05.582    05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83398 root 20 0 20.1t 72576 30080 R 99.9 0.6 0:00.71 reactor_0
00:12:05.582    05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:05.582    05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 83398 2
00:12:05.582   05:57:26 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 83398 2 busy
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83398
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:05.583   05:57:26 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:05.583    05:57:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83398 -w 256
00:12:05.583    05:57:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:05.841   05:57:26 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83408 root      20   0   20.1t  72576  30080 R  99.9   0.6   0:00.45 reactor_2'
00:12:05.842    05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83408 root 20 0 20.1t 72576 30080 R 99.9 0.6 0:00.45 reactor_2
00:12:05.842    05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:05.842    05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:05.842   05:57:26 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:12:05.842   05:57:26 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:12:05.842   05:57:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:05.842   05:57:26 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:05.842   05:57:26 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:05.842   05:57:26 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:05.842   05:57:26 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:12:06.100  [2024-11-18 05:57:27.002947] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:12:06.100  [2024-11-18 05:57:27.003893] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:06.100   05:57:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']'
00:12:06.100   05:57:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 83398 2
00:12:06.100   05:57:27 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83398 2 idle
00:12:06.100   05:57:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83398
00:12:06.100   05:57:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:06.100   05:57:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:06.101   05:57:27 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:06.101   05:57:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:06.101   05:57:27 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:06.101   05:57:27 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:06.101   05:57:27 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:06.101   05:57:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:06.101   05:57:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:06.101    05:57:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:06.101    05:57:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83398 -w 256
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83408 root      20   0   20.1t  72576  30080 S   0.0   0.6   0:00.71 reactor_2'
00:12:06.360    05:57:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:06.360    05:57:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83408 root 20 0 20.1t 72576 30080 S 0.0 0.6 0:00.71 reactor_2
00:12:06.360    05:57:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:06.360   05:57:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:12:06.619  [2024-11-18 05:57:27.494907] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:12:06.619  [2024-11-18 05:57:27.495574] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:06.619   05:57:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']'
00:12:06.619   05:57:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}"
00:12:06.619   05:57:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1
00:12:06.878  [2024-11-18 05:57:27.767441] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
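The without-threads pass brackets the mode flips by migrating app_thread away from reactor 0 and back: thread id 1 is moved to core 1 (0x2) before reactor 0 is toggled, then restored to core 0 (0x1) here, via the two RPCs visible in the trace:

    scripts/rpc.py thread_set_cpumask -i 1 -m 0x2   # park app_thread on core 1
    scripts/rpc.py thread_set_cpumask -i 1 -m 0x1   # restore it to core 0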
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 83398 0
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83398 0 idle
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83398
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:06.878   05:57:27 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:06.878    05:57:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83398 -w 256
00:12:06.878    05:57:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:07.138   05:57:27 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83398 root      20   0   20.1t  72704  30080 S   0.0   0.6   0:01.69 reactor_0'
00:12:07.138    05:57:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83398 root 20 0 20.1t 72704 30080 S 0.0 0.6 0:01.69 reactor_0
00:12:07.138    05:57:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:07.138    05:57:27 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT
00:12:07.138   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 83398
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 83398 ']'
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 83398
00:12:07.139    05:57:28 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:07.139    05:57:28 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83398
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:07.139  killing process with pid 83398
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83398'
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 83398
00:12:07.139   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 83398
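killprocess is defensive teardown: confirm the pid argument is non-empty and the process still alive with kill -0, refuse to signal if the process name resolves to sudo, then kill and wait so the child is reaped before cleanup removes the aiofile. The same pattern in miniature (killproc is an illustrative name):

    killproc() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                  # already gone
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" 2>/dev/null                  # reap our child
    }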
00:12:07.397   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=83528
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:12:07.398   05:57:28 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 83528 /var/tmp/spdk.sock
00:12:07.398   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 83528 ']'
00:12:07.398   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:07.398   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:07.398  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:07.398   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:07.398   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:07.398   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:12:07.398  [2024-11-18 05:57:28.267204] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:07.398  [2024-11-18 05:57:28.267395] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83528 ]
00:12:07.656  [2024-11-18 05:57:28.422996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:07.656  [2024-11-18 05:57:28.446364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:07.656  [2024-11-18 05:57:28.446471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:07.656  [2024-11-18 05:57:28.446554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:07.656  [2024-11-18 05:57:28.490850] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:07.656   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:07.656   05:57:28 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0
00:12:07.656   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem
00:12:07.656   05:57:28 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:07.968  Malloc0
00:12:07.968  Malloc1
00:12:07.968  Malloc2
00:12:07.968   05:57:28 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio
00:12:07.968    05:57:28 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s
00:12:07.968   05:57:28 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:12:07.968   05:57:28 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:12:07.968  5000+0 records in
00:12:07.968  5000+0 records out
00:12:07.968  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0197283 s, 519 MB/s
00:12:07.968   05:57:28 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:12:08.227  AIO0
00:12:08.227   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 83528
00:12:08.227   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 83528
00:12:08.227   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=83528
00:12:08.227   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=
00:12:08.227   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:12:08.227    05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:12:08.227    05:57:29 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1
00:12:08.227    05:57:29 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:08.227    05:57:29 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1
00:12:08.227    05:57:29 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:08.227     05:57:29 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:08.227     05:57:29 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:08.485    05:57:29 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1
00:12:08.486   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:12:08.486    05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:12:08.486    05:57:29 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4
00:12:08.486    05:57:29 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:08.486    05:57:29 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4
00:12:08.486    05:57:29 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:08.486     05:57:29 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:08.486     05:57:29 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:08.745    05:57:29 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo ''
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:12:08.745  spdk_thread ids are 1 on reactor0.
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 83528 0
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83528 0 idle
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83528
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:08.745   05:57:29 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:08.746   05:57:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:08.746   05:57:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:08.746    05:57:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83528 -w 256
00:12:08.746    05:57:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83528 root      20   0   20.1t  66816  30080 S   0.0   0.5   0:00.21 reactor_0'
00:12:09.005    05:57:29 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83528 root 20 0 20.1t 66816 30080 S 0.0 0.5 0:00.21 reactor_0
00:12:09.005    05:57:29 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:09.005    05:57:29 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 83528 1
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83528 1 idle
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83528
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:09.005   05:57:29 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:09.005    05:57:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83528 -w 256
00:12:09.005    05:57:29 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83531 root      20   0   20.1t  66816  30080 S   0.0   0.5   0:00.00 reactor_1'
00:12:09.264    05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83531 root 20 0 20.1t 66816 30080 S 0.0 0.5 0:00.00 reactor_1
00:12:09.264    05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:09.264    05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:09.264   05:57:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 83528 2
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83528 2 idle
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83528
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:09.265   05:57:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:09.265    05:57:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:09.265    05:57:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83528 -w 256
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83532 root      20   0   20.1t  66944  30080 S   0.0   0.5   0:00.00 reactor_2'
00:12:09.524    05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83532 root 20 0 20.1t 66944 30080 S 0.0 0.5 0:00.00 reactor_2
00:12:09.524    05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:09.524    05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']'
00:12:09.524   05:57:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:12:09.783  [2024-11-18 05:57:30.531641] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:12:09.783  [2024-11-18 05:57:30.531973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode.
00:12:09.783  [2024-11-18 05:57:30.532714] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:09.783   05:57:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:12:10.043  [2024-11-18 05:57:30.799463] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:12:10.043  [2024-11-18 05:57:30.800234] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 83528 0
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 83528 0 busy
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83528
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:10.043   05:57:30 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:10.043    05:57:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:10.043    05:57:30 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83528 -w 256
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83528 root      20   0   20.1t  72576  30080 R  99.9   0.6   0:00.73 reactor_0'
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83528 root 20 0 20.1t 72576 30080 R 99.9 0.6 0:00.73 reactor_0
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 83528 2
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 83528 2 busy
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83528
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83528 -w 256
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83532 root      20   0   20.1t  72576  30080 R  90.9   0.6   0:00.44 reactor_2'
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83532 root 20 0 20.1t 72576 30080 R 90.9 0.6 0:00.44 reactor_2
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:10.302    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=90.9
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=90
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:10.302   05:57:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:12:10.561  [2024-11-18 05:57:31.511756] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:12:10.561  [2024-11-18 05:57:31.512455] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']'
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 83528 2
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83528 2 idle
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83528
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:10.561   05:57:31 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:10.561    05:57:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83528 -w 256
00:12:10.561    05:57:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83532 root      20   0   20.1t  72576  30080 S   0.0   0.6   0:00.70 reactor_2'
00:12:10.820    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83532 root 20 0 20.1t 72576 30080 S 0.0 0.6 0:00.70 reactor_2
00:12:10.820    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:10.820    05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
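Lines @51 and @59 above form the test's core step: switch one reactor to interrupt mode over JSON-RPC, then re-sample top until the thread reports idle. A sketch of that switch-and-verify step, reusing check_thread_state from the sketch above (the RPC path, plugin, and method names are copied from the trace; the retry cadence and sleep are assumptions):

    RPC_PY=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    switch_reactor_to_intr() {   # switch_reactor_to_intr <pid> <reactor-idx>
        local pid=$1 idx=$2 j
        "$RPC_PY" --plugin interrupt_plugin reactor_set_interrupt_mode "$idx"
        for (( j = 10; j != 0; j-- )); do   # mirrors the j=10 countdown at common.sh@25
            check_thread_state "$pid" "reactor_$idx" idle && return 0
            sleep 0.5                       # assumed back-off between samples
        done
        return 1
    }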
00:12:10.820   05:57:31 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:12:11.079  [2024-11-18 05:57:32.015836] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:12:11.079  [2024-11-18 05:57:32.016594] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode.
00:12:11.079  [2024-11-18 05:57:32.016682] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']'
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 83528 0
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 83528 0 idle
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=83528
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:11.079   05:57:32 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:11.079    05:57:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 83528 -w 256
00:12:11.079    05:57:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor='  83528 root      20   0   20.1t  72704  30080 S  10.0   0.6   0:01.72 reactor_0'
00:12:11.338    05:57:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 83528 root 20 0 20.1t 72704 30080 S 10.0 0.6 0:01.72 reactor_0
00:12:11.338    05:57:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:11.338    05:57:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=10.0
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=10
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:12:11.338   05:57:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 83528
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 83528 ']'
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 83528
00:12:11.338    05:57:32 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:11.338    05:57:32 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83528
00:12:11.338  killing process with pid 83528
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83528'
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 83528
00:12:11.338   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 83528
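The killprocess trace above shows the guard rails autotest_common.sh applies before signalling: check that a pid was given, probe liveness with kill -0, look up the process name, and refuse to signal sudo itself. A condensed sketch of the same pattern (the sudo branch is simplified to a refusal here; the real helper handles that case differently):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard at @954
        kill -0 "$pid" 2> /dev/null || return 1   # @958: is the pid still alive?
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        [[ $process_name == sudo ]] && return 1   # @964: never signal the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"   # reap; valid because the test shell spawned $pid
    }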
00:12:11.597   05:57:32 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup
00:12:11.597   05:57:32 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:12:11.597  ************************************
00:12:11.597  END TEST reactor_set_interrupt
00:12:11.597  ************************************
00:12:11.597  
00:12:11.597  real	0m9.543s
00:12:11.597  user	0m9.932s
00:12:11.597  sys	0m1.652s
00:12:11.597   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:11.597   05:57:32 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:12:11.597   05:57:32  -- spdk/autotest.sh@184 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:12:11.597   05:57:32  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:11.597   05:57:32  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:11.597   05:57:32  -- common/autotest_common.sh@10 -- # set +x
00:12:11.597  ************************************
00:12:11.597  START TEST reap_unregistered_poller
00:12:11.597  ************************************
00:12:11.597   05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
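The START/END banner pairs and the real/user/sys block in this log come from autotest.sh's run_test wrapper, which times each test script between markers. A simplified sketch of such a wrapper (the real one also toggles xtrace, as the set +x lines above show, and its exact output ordering may differ):

    run_test() {
        local name=$1; shift
        (( $# >= 1 )) || return 1   # cf. the '[' 2 -le 1 ']' argument check at @1105
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"; local rc=$?      # 'time' is a keyword, so $? is the test's status
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }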
00:12:11.859  * Looking for test storage...
00:12:11.859  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:11.859    05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:11.859     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lcov --version
00:12:11.859     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:11.859    05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-:
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-:
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<'
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2
00:12:11.859    05:57:32 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:11.860     05:57:32 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:11.860    05:57:32 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0
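The scripts/common.sh burst above is `lt 1.15 2` delegating to cmp_versions: both version strings are split on '.', '-', and ':' via IFS, then compared numerically left to right, with the shorter list padded. A trimmed sketch covering only the '<' branch the trace exercises (it assumes purely numeric fields, which the real decimal() validates with the ^[0-9]+$ regex seen at @354):

    version_lt() {   # version_lt A B  ->  success iff A < B
        local -a ver1 ver2
        local v d1 d2 len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing fields count as 0
            (( d1 < d2 )) && return 0
            (( d1 > d2 )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"   # matches the trace: 1 < 2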
00:12:11.860    05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:11.860    05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:11.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.860  		--rc genhtml_branch_coverage=1
00:12:11.860  		--rc genhtml_function_coverage=1
00:12:11.860  		--rc genhtml_legend=1
00:12:11.860  		--rc geninfo_all_blocks=1
00:12:11.860  		--rc geninfo_unexecuted_blocks=1
00:12:11.860  		
00:12:11.860  		'
00:12:11.860    05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:11.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.860  		--rc genhtml_branch_coverage=1
00:12:11.860  		--rc genhtml_function_coverage=1
00:12:11.860  		--rc genhtml_legend=1
00:12:11.860  		--rc geninfo_all_blocks=1
00:12:11.860  		--rc geninfo_unexecuted_blocks=1
00:12:11.860  		
00:12:11.860  		'
00:12:11.860    05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:11.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.860  		--rc genhtml_branch_coverage=1
00:12:11.860  		--rc genhtml_function_coverage=1
00:12:11.860  		--rc genhtml_legend=1
00:12:11.860  		--rc geninfo_all_blocks=1
00:12:11.860  		--rc geninfo_unexecuted_blocks=1
00:12:11.860  		
00:12:11.860  		'
00:12:11.860    05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:11.860  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:11.860  		--rc genhtml_branch_coverage=1
00:12:11.860  		--rc genhtml_function_coverage=1
00:12:11.860  		--rc genhtml_legend=1
00:12:11.860  		--rc geninfo_all_blocks=1
00:12:11.860  		--rc geninfo_unexecuted_blocks=1
00:12:11.860  		
00:12:11.860  		'
00:12:11.860   05:57:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:12:11.860      05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:12:11.860     05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:11.860    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:11.860     05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:12:11.860    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
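interrupt_common.sh@5-6 above is the self-locating preamble SPDK test files open with: resolve the script's own directory, then walk up to the repository root. The generic form of the idiom ("$0" here stands in for the calling test script's path):

    testdir=$(readlink -f "$(dirname "$0")")   # e.g. .../spdk/test/interrupt
    rootdir=$(readlink -f "$testdir/../..")    # e.g. .../spdk, two levels up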
00:12:11.860    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:12:11.860     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_CET=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:12:11.860      05:57:32 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_FC=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:12:11.861      05:57:32 reap_unregistered_poller -- common/build_config.sh@90 -- # CONFIG_URING=n
00:12:11.861     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:11.861        05:57:32 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:11.861       05:57:32 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:12:11.861  #define SPDK_CONFIG_H
00:12:11.861  #define SPDK_CONFIG_AIO_FSDEV 1
00:12:11.861  #define SPDK_CONFIG_APPS 1
00:12:11.861  #define SPDK_CONFIG_ARCH native
00:12:11.861  #define SPDK_CONFIG_ASAN 1
00:12:11.861  #undef SPDK_CONFIG_AVAHI
00:12:11.861  #undef SPDK_CONFIG_CET
00:12:11.861  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:12:11.861  #define SPDK_CONFIG_COVERAGE 1
00:12:11.861  #define SPDK_CONFIG_CROSS_PREFIX 
00:12:11.861  #undef SPDK_CONFIG_CRYPTO
00:12:11.861  #undef SPDK_CONFIG_CRYPTO_MLX5
00:12:11.861  #undef SPDK_CONFIG_CUSTOMOCF
00:12:11.861  #undef SPDK_CONFIG_DAOS
00:12:11.861  #define SPDK_CONFIG_DAOS_DIR 
00:12:11.861  #define SPDK_CONFIG_DEBUG 1
00:12:11.861  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:12:11.861  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:12:11.861  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:12:11.861  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:12:11.861  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:12:11.861  #undef SPDK_CONFIG_DPDK_UADK
00:12:11.861  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:11.861  #define SPDK_CONFIG_EXAMPLES 1
00:12:11.861  #undef SPDK_CONFIG_FC
00:12:11.861  #define SPDK_CONFIG_FC_PATH 
00:12:11.861  #define SPDK_CONFIG_FIO_PLUGIN 1
00:12:11.861  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:12:11.861  #define SPDK_CONFIG_FSDEV 1
00:12:11.861  #undef SPDK_CONFIG_FUSE
00:12:11.861  #undef SPDK_CONFIG_FUZZER
00:12:11.861  #define SPDK_CONFIG_FUZZER_LIB 
00:12:11.861  #undef SPDK_CONFIG_GOLANG
00:12:11.861  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:12:11.861  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:12:11.861  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:12:11.861  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:12:11.861  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:12:11.861  #undef SPDK_CONFIG_HAVE_LIBBSD
00:12:11.861  #undef SPDK_CONFIG_HAVE_LZ4
00:12:11.861  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:12:11.861  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:12:11.861  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:12:11.861  #define SPDK_CONFIG_IDXD 1
00:12:11.861  #define SPDK_CONFIG_IDXD_KERNEL 1
00:12:11.861  #undef SPDK_CONFIG_IPSEC_MB
00:12:11.861  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:12:11.861  #define SPDK_CONFIG_ISAL 1
00:12:11.861  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:12:11.861  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:12:11.861  #define SPDK_CONFIG_LIBDIR 
00:12:11.861  #undef SPDK_CONFIG_LTO
00:12:11.861  #define SPDK_CONFIG_MAX_LCORES 128
00:12:11.861  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:12:11.861  #define SPDK_CONFIG_NVME_CUSE 1
00:12:11.861  #undef SPDK_CONFIG_OCF
00:12:11.861  #define SPDK_CONFIG_OCF_PATH 
00:12:11.861  #define SPDK_CONFIG_OPENSSL_PATH 
00:12:11.861  #undef SPDK_CONFIG_PGO_CAPTURE
00:12:11.861  #define SPDK_CONFIG_PGO_DIR 
00:12:11.861  #undef SPDK_CONFIG_PGO_USE
00:12:11.861  #define SPDK_CONFIG_PREFIX /usr/local
00:12:11.861  #undef SPDK_CONFIG_RAID5F
00:12:11.861  #undef SPDK_CONFIG_RBD
00:12:11.861  #define SPDK_CONFIG_RDMA 1
00:12:11.861  #define SPDK_CONFIG_RDMA_PROV verbs
00:12:11.861  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:12:11.861  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:12:11.861  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:12:11.861  #undef SPDK_CONFIG_SHARED
00:12:11.861  #undef SPDK_CONFIG_SMA
00:12:11.861  #define SPDK_CONFIG_TESTS 1
00:12:11.861  #undef SPDK_CONFIG_TSAN
00:12:11.861  #define SPDK_CONFIG_UBLK 1
00:12:11.861  #define SPDK_CONFIG_UBSAN 1
00:12:11.861  #define SPDK_CONFIG_UNIT_TESTS 1
00:12:11.861  #undef SPDK_CONFIG_URING
00:12:11.861  #define SPDK_CONFIG_URING_PATH 
00:12:11.861  #undef SPDK_CONFIG_URING_ZNS
00:12:11.861  #undef SPDK_CONFIG_USDT
00:12:11.861  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:12:11.861  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:12:11.861  #undef SPDK_CONFIG_VFIO_USER
00:12:11.861  #define SPDK_CONFIG_VFIO_USER_DIR 
00:12:11.861  #define SPDK_CONFIG_VHOST 1
00:12:11.861  #define SPDK_CONFIG_VIRTIO 1
00:12:11.861  #undef SPDK_CONFIG_VTUNE
00:12:11.861  #define SPDK_CONFIG_VTUNE_DIR 
00:12:11.861  #define SPDK_CONFIG_WERROR 1
00:12:11.861  #define SPDK_CONFIG_WPDK_DIR 
00:12:11.861  #undef SPDK_CONFIG_XNVME
00:12:11.861  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
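The block ending here is a single applications.sh@23 test: it slurps the generated include/spdk/config.h into the left-hand side of a [[ ... == *pattern* ]] match against "#define SPDK_CONFIG_DEBUG" (the backslash-escaped run above). A minimal reproduction of the idiom, using the config path from @22:

    config_h=/home/vagrant/spdk_repo/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "debug build detected"
    fi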
00:12:11.861      05:57:32 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:12:11.861     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:11.861      05:57:32 reap_unregistered_poller -- scripts/common.sh@15 -- # shopt -s extglob
00:12:11.861      05:57:32 reap_unregistered_poller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:11.861      05:57:32 reap_unregistered_poller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:11.861      05:57:32 reap_unregistered_poller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:11.861       05:57:32 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:11.861       05:57:32 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:11.861       05:57:32 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:11.862       05:57:32 reap_unregistered_poller -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:11.862       05:57:32 reap_unregistered_poller -- paths/export.sh@6 -- # export PATH
00:12:11.862       05:57:32 reap_unregistered_poller -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:11.862        05:57:32 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:11.862       05:57:32 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:11.862       05:57:32 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:12:11.862       05:57:32 reap_unregistered_poller -- pm/common@68 -- # uname -s
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]=
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E'
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]]
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:12:11.862      05:57:32 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
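pm/common@70-76 above builds a lookup of which resource monitors must run under sudo, plus a two-entry SUDO prefix array indexed by that flag. A sketch of how such a map can drive command launching (the final echo is illustrative; the real script launches the monitor itself):

    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1 [collect-cpu-load]=0
        [collect-cpu-temp]=0 [collect-vmstat]=0
    )
    SUDO[0]= SUDO[1]='sudo -E'       # as at pm/common@76
    monitor=collect-cpu-load         # flag 0: no elevation needed
    flag=${MONITOR_RESOURCES_SUDO[$monitor]}
    ${SUDO[flag]} echo "would launch $monitor"   # empty prefix expands away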
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@70 -- # :
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 1
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : v22.11.4
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : true
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0
00:12:11.862     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@154 -- # :
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@169 -- # :
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@175 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@177 -- # : 0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
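The long ': N' / 'export VAR' ladder above is autotest_common.sh assigning defaults: each ': "${VAR:=value}"' line is a no-op command whose expansion assigns only when the variable is unset, and the paired export publishes it, so a CI job can pre-seed any flag from its environment (here RUN_NIGHTLY defaults to 1 for this nightly run). The idiom in isolation, with a made-up flag name:

    SPDK_DEMO_FLAG=1            # pretend the CI job pre-seeded this
    : "${SPDK_DEMO_FLAG:=0}"    # ':' ignores its arguments; := assigns only if unset
    export SPDK_DEMO_FLAG       # publish to child processes
    echo "$SPDK_DEMO_FLAG"      # -> 1: the default did not clobber the pre-seed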
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@206 -- # cat
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export QEMU_BIN=
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@259 -- # QEMU_BIN=
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@269 -- # _LCOV=
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@275 -- # lcov_opt=
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@279 -- # export valgrind=
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@279 -- # valgrind=
00:12:11.863      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@285 -- # uname -s
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@289 -- # MAKE=make
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@309 -- # TEST_MODE=
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@331 -- # [[ -z 83681 ]]
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@331 -- # kill -0 83681
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@344 -- # local mount target_dir
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:12:11.863     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:12:11.864      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.dbfo1G
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.dbfo1G/tests/interrupt /tmp/spdk.dbfo1G
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@340 -- # df -T
00:12:11.864      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1249308672
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4714496
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=8908341248
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=19681529856
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=10756411392
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=6266687488
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=6270115840
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda16
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=777306112
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=923156480
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=81207296
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=103000064
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=6395904
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1254010880
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1254023168
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu24-vg-autotest/ubuntu2404-libvirt/output
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=97188618240
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=2514161664
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:12:11.864  * Looking for test storage...
00:12:11.864     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:12.124      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:12.124      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@385 -- # mount=/
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@387 -- # target_space=8908341248
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]]
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]]
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@394 -- # new_size=12971003904
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:12.124  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@402 -- # return 0
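Condensing the storage probe traced above: set_test_storage parses df -T into associative arrays keyed by mount point, then walks the candidate directories and keeps the first one whose filesystem can absorb the requested 2 GiB plus the 64 MiB margin visible in the jump from 2147483648 to 2214592512. A rough sketch, assuming GNU df's 1K-block output (the real helper also special-cases tmpfs/ramfs and rejects a target that would end up more than 95% full):

    declare -A mounts fss sizes avails
    requested_size=$((2147483648 + 67108864))    # 2 GiB + 64 MiB margin
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))         # df -T reports 1K blocks
        avails["$mount"]=$((avail * 1024))
    done < <(df -T | grep -v Filesystem)
    target_space=${avails[/]}                    # mount backing the chosen candidate
    (( target_space >= requested_size )) && echo "candidate has enough room"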
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set -o errtrace
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # true
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # xtrace_fd
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x
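The xtrace_restore block above re-arms tracing onto a dedicated descriptor so that set -x output lands in the per-test log instead of stderr. Roughly, under the assumption that fd 13 (the number checked against /proc/self/fd/13 above) was opened onto a log file earlier in the run:

    exec 13>> /var/tmp/xtrace.log       # assumed log path; fd 13 as in the trace
    export BASH_XTRACEFD=13             # bash sends set -x output to this fd
    PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
    set -x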
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:12.124      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lcov --version
00:12:12.124      05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-:
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-:
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<'
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:12.124      05:57:32 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:12.124     05:57:32 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0
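What the lt 1.15 2 call above actually does: split both versions on the characters . - :, then compare field by field as integers, succeeding on the first strictly-smaller field. A trimmed sketch of that logic (numeric fields only; the real scripts/common.sh cmp_versions also handles the other operators and strips non-digits via its decimal helper):

    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first smaller field wins
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is older than 2, use the 1.x --rc spelling"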
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:12.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.124  		--rc genhtml_branch_coverage=1
00:12:12.124  		--rc genhtml_function_coverage=1
00:12:12.124  		--rc genhtml_legend=1
00:12:12.124  		--rc geninfo_all_blocks=1
00:12:12.124  		--rc geninfo_unexecuted_blocks=1
00:12:12.124  		
00:12:12.124  		'
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:12.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.124  		--rc genhtml_branch_coverage=1
00:12:12.124  		--rc genhtml_function_coverage=1
00:12:12.124  		--rc genhtml_legend=1
00:12:12.124  		--rc geninfo_all_blocks=1
00:12:12.124  		--rc geninfo_unexecuted_blocks=1
00:12:12.124  		
00:12:12.124  		'
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:12.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.124  		--rc genhtml_branch_coverage=1
00:12:12.124  		--rc genhtml_function_coverage=1
00:12:12.124  		--rc genhtml_legend=1
00:12:12.124  		--rc geninfo_all_blocks=1
00:12:12.124  		--rc geninfo_unexecuted_blocks=1
00:12:12.124  		
00:12:12.124  		'
00:12:12.124     05:57:32 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:12.124  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:12.124  		--rc genhtml_branch_coverage=1
00:12:12.124  		--rc genhtml_function_coverage=1
00:12:12.124  		--rc genhtml_legend=1
00:12:12.124  		--rc geninfo_all_blocks=1
00:12:12.124  		--rc geninfo_unexecuted_blocks=1
00:12:12.124  		
00:12:12.124  		'
00:12:12.124    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh
00:12:12.124    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:12.125    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1
00:12:12.125    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2
00:12:12.125    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4
00:12:12.125    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07
00:12:12.125    05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=83748
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:12:12.125   05:57:32 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 83748 /var/tmp/spdk.sock
00:12:12.125  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:12.125   05:57:32 reap_unregistered_poller -- common/autotest_common.sh@835 -- # '[' -z 83748 ']'
00:12:12.125   05:57:32 reap_unregistered_poller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:12.125   05:57:32 reap_unregistered_poller -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:12.125   05:57:32 reap_unregistered_poller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:12.125   05:57:32 reap_unregistered_poller -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:12.125   05:57:32 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:12.125  [2024-11-18 05:57:32.990852] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:12.125  [2024-11-18 05:57:32.991029] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83748 ]
00:12:12.384  [2024-11-18 05:57:33.143438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:12.384  [2024-11-18 05:57:33.167904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:12.384  [2024-11-18 05:57:33.167978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:12.384  [2024-11-18 05:57:33.168046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:12.384  [2024-11-18 05:57:33.212612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
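start_intr_tgt, traced above, launches the interrupt_tgt example pinned to three cores (-m 0x07) with its RPC server on /var/tmp/spdk.sock, then blocks in waitforlisten until the socket answers. A hedged sketch of that wait loop (the probe RPC shown is an assumption; max_retries=100 comes from the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g &
    intr_tgt_pid=$!
    for (( i = 0; i < 100; i++ )); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &> /dev/null && break               # socket is up once any RPC succeeds
        kill -0 "$intr_tgt_pid" || exit 1       # bail out if the target died early
        sleep 0.1
    done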
00:12:12.384   05:57:33 reap_unregistered_poller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:12.384   05:57:33 reap_unregistered_poller -- common/autotest_common.sh@868 -- # return 0
00:12:12.384    05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers
00:12:12.384    05:57:33 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:12.384    05:57:33 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:12.384    05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]'
00:12:12.384    05:57:33 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{
00:12:12.384    "name": "app_thread",
00:12:12.384    "id": 1,
00:12:12.384    "active_pollers": [],
00:12:12.384    "timed_pollers": [
00:12:12.384      {
00:12:12.384        "name": "rpc_subsystem_poll_servers",
00:12:12.384        "id": 1,
00:12:12.384        "state": "waiting",
00:12:12.384        "run_count": 0,
00:12:12.384        "busy_count": 0,
00:12:12.384        "period_ticks": 8800000
00:12:12.384      }
00:12:12.384    ],
00:12:12.384    "paused_pollers": []
00:12:12.384  }'
00:12:12.384    05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name'
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers=
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' '
00:12:12.384    05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name'
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers
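The poller bookkeeping above is just three jq pulls over a single RPC result: grab thread 0 from thread_get_pollers, then collect the active and timed poller names (here only the timed rpc_subsystem_poll_servers exists). Condensed:

    app_thread=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_pollers | jq -r '.threads[0]')
    native_pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")
    native_pollers+=' '
    native_pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")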
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio
00:12:12.384    05:57:33 reap_unregistered_poller -- interrupt/common.sh@77 -- # uname -s
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:12:12.384  5000+0 records in
00:12:12.384  5000+0 records out
00:12:12.384  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0209997 s, 488 MB/s
00:12:12.384   05:57:33 reap_unregistered_poller -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:12:12.644  AIO0
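setup_bdev_aio above needs only two commands: carve out a 10 MB file (2048-byte blocks x 5000) and register it as an AIO bdev named AIO0 with a 2048-byte block size:

    dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048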
00:12:12.644   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:12.903   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1
00:12:13.161    05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers
00:12:13.161    05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]'
00:12:13.161    05:57:33 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:13.162    05:57:33 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:13.162    05:57:33 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:13.162   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{
00:12:13.162    "name": "app_thread",
00:12:13.162    "id": 1,
00:12:13.162    "active_pollers": [],
00:12:13.162    "timed_pollers": [
00:12:13.162      {
00:12:13.162        "name": "rpc_subsystem_poll_servers",
00:12:13.162        "id": 1,
00:12:13.162        "state": "waiting",
00:12:13.162        "run_count": 0,
00:12:13.162        "busy_count": 0,
00:12:13.162        "period_ticks": 8800000
00:12:13.162      }
00:12:13.162    ],
00:12:13.162    "paused_pollers": []
00:12:13.162  }'
00:12:13.162    05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name'
00:12:13.162   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers=
00:12:13.162   05:57:33 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' '
00:12:13.162    05:57:34 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name'
00:12:13.162   05:57:34 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers
00:12:13.162   05:57:34 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[  rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]]
00:12:13.162   05:57:34 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:12:13.162   05:57:34 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 83748
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@954 -- # '[' -z 83748 ']'
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@958 -- # kill -0 83748
00:12:13.162    05:57:34 reap_unregistered_poller -- common/autotest_common.sh@959 -- # uname
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:13.162    05:57:34 reap_unregistered_poller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83748
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83748'
00:12:13.162  killing process with pid 83748
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@973 -- # kill 83748
00:12:13.162   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@978 -- # wait 83748
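The teardown above follows the usual killprocess shape: verify the pid is alive and is not something dangerous to signal, then kill and reap it. A simplified sketch (the real helper special-cases processes launched through sudo rather than refusing them outright):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 1              # still running?
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        kill "$pid"
        wait "$pid" || true     # reap; wait works because the pid is our child
    }
    killprocess 83748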
00:12:13.421   05:57:34 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup
00:12:13.421   05:57:34 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:12:13.421  ************************************
00:12:13.421  END TEST reap_unregistered_poller
00:12:13.421  ************************************
00:12:13.421  
00:12:13.421  real	0m1.698s
00:12:13.421  user	0m1.270s
00:12:13.421  sys	0m0.500s
00:12:13.421   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:13.421   05:57:34 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:13.421   05:57:34  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:12:13.421    05:57:34  -- spdk/autotest.sh@194 -- # uname -s
00:12:13.421   05:57:34  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:12:13.421   05:57:34  -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]]
00:12:13.421   05:57:34  -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]]
00:12:13.421   05:57:34  -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:12:13.421   05:57:34  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:13.421   05:57:34  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:13.421   05:57:34  -- common/autotest_common.sh@10 -- # set +x
00:12:13.421  ************************************
00:12:13.421  START TEST spdk_dd
00:12:13.421  ************************************
00:12:13.421   05:57:34 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:12:13.421  * Looking for test storage...
00:12:13.421  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:12:13.421     05:57:34 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:13.421      05:57:34 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:13.421      05:57:34 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version
00:12:13.681     05:57:34 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@336 -- # IFS=.-:
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@337 -- # IFS=.-:
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@338 -- # local 'op=<'
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@344 -- # case "$op" in
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@345 -- # : 1
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@365 -- # decimal 1
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@353 -- # local d=1
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@355 -- # echo 1
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@366 -- # decimal 2
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@353 -- # local d=2
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:13.681      05:57:34 spdk_dd -- scripts/common.sh@355 -- # echo 2
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@368 -- # return 0
00:12:13.681     05:57:34 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:13.681     05:57:34 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:13.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:13.681  		--rc genhtml_branch_coverage=1
00:12:13.681  		--rc genhtml_function_coverage=1
00:12:13.681  		--rc genhtml_legend=1
00:12:13.681  		--rc geninfo_all_blocks=1
00:12:13.681  		--rc geninfo_unexecuted_blocks=1
00:12:13.681  		
00:12:13.681  		'
00:12:13.681     05:57:34 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:13.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:13.681  		--rc genhtml_branch_coverage=1
00:12:13.681  		--rc genhtml_function_coverage=1
00:12:13.681  		--rc genhtml_legend=1
00:12:13.681  		--rc geninfo_all_blocks=1
00:12:13.681  		--rc geninfo_unexecuted_blocks=1
00:12:13.681  		
00:12:13.681  		'
00:12:13.681     05:57:34 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:13.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:13.681  		--rc genhtml_branch_coverage=1
00:12:13.681  		--rc genhtml_function_coverage=1
00:12:13.681  		--rc genhtml_legend=1
00:12:13.681  		--rc geninfo_all_blocks=1
00:12:13.681  		--rc geninfo_unexecuted_blocks=1
00:12:13.681  		
00:12:13.681  		'
00:12:13.681     05:57:34 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:13.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:13.681  		--rc genhtml_branch_coverage=1
00:12:13.681  		--rc genhtml_function_coverage=1
00:12:13.681  		--rc genhtml_legend=1
00:12:13.681  		--rc geninfo_all_blocks=1
00:12:13.681  		--rc geninfo_unexecuted_blocks=1
00:12:13.681  		
00:12:13.681  		'
00:12:13.681    05:57:34 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:13.681     05:57:34 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:13.681      05:57:34 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:13.681      05:57:34 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:13.681      05:57:34 spdk_dd -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:13.681      05:57:34 spdk_dd -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:13.681      05:57:34 spdk_dd -- paths/export.sh@6 -- # export PATH
00:12:13.681      05:57:34 spdk_dd -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:13.682   05:57:34 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:13.941  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:12:13.941  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
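setup.sh above leaves vda alone because its partitions are mounted ("Active devices: mount@vda:...") and notes the NVMe controller is already on uio_pci_generic. For reference, one common way that userspace rebind is done by hand via sysfs (device address taken from this run; setup.sh itself carries far more logic, hugepage configuration included):

    bdf=0000:00:10.0
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind    # detach current driver
    fi
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo "$bdf" > /sys/bus/pci/drivers_probe                     # rebind using the override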
00:12:14.510   05:57:35 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace))
00:12:14.510    05:57:35 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace
00:12:14.510    05:57:35 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs
00:12:14.510    05:57:35 spdk_dd -- scripts/common.sh@313 -- # local nvmes
00:12:14.510    05:57:35 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]]
00:12:14.511    05:57:35 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@298 -- # local bdf=
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@233 -- # local class
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@234 -- # local subclass
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@235 -- # local progif
00:12:14.511       05:57:35 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@236 -- # class=01
00:12:14.511       05:57:35 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@237 -- # subclass=08
00:12:14.511       05:57:35 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@238 -- # progif=02
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@240 -- # hash lspci
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:12:14.511      05:57:35 spdk_dd -- scripts/common.sh@245 -- # tr -d '"'
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@18 -- # local i
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@27 -- # return 0
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:12:14.511    05:57:35 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:14.511    05:57:35 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:12:14.511     05:57:35 spdk_dd -- scripts/common.sh@323 -- # uname -s
00:12:14.511    05:57:35 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:14.511    05:57:35 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:14.511    05:57:35 spdk_dd -- scripts/common.sh@328 -- # (( 1 ))
00:12:14.511    05:57:35 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0
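nvme_in_userspace, traced above, filters lspci output down to NVMe controllers (class 01, subclass 08, prog-if 02, i.e. the 0108 and -p02 fields) and emits each usable BDF. A condensed sketch; note that bash xtrace hides any leading !, so the polarity of the /sys/bus/pci/drivers/nvme check is inferred from the function's name (a controller still owned by the kernel nvme driver is not usable from userspace):

    nvmes=()
    while read -r bdf; do
        [[ -e /sys/bus/pci/drivers/nvme/$bdf ]] || nvmes+=("$bdf")
    done < <(lspci -mm -n -D | grep -i -- -p02 | tr -d '"' | awk '$2 == "0108" {print $1}')
    printf '%s\n' "${nvmes[@]}"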
00:12:14.511   05:57:35 spdk_dd -- dd/dd.sh@13 -- # check_liburing
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@139 -- # local lib
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:14.511    05:57:35 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:14.511    05:57:35 spdk_dd -- dd/common.sh@137 -- # grep NEEDED
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]]
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]]
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]]
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]]
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]]
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n'
00:12:14.511  * spdk_dd linked to liburing
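check_liburing above decides by reading the ELF dynamic section: objdump -p lists the NEEDED entries, and any liburing.so.* among them means spdk_dd was linked against liburing. The same probe, condensed (binary path as used in this run):

    liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1   # e.g. liburing.so.2
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    echo "liburing_in_use=$liburing_in_use"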
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:12:14.511   05:57:35 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:12:14.511    05:57:35 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:12:14.512    05:57:35 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:12:14.512    05:57:35 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=n
00:12:14.512   05:57:35 spdk_dd -- dd/common.sh@149 -- # [[ n != y ]]
00:12:14.512   05:57:35 spdk_dd -- dd/common.sh@150 -- # printf '* spdk_dd built with liburing, but no liburing support requested?\n'
00:12:14.512  * spdk_dd built with liburing, but no liburing support requested?
00:12:14.512   05:57:35 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1
00:12:14.512   05:57:35 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1
00:12:14.512   05:57:35 spdk_dd -- dd/common.sh@153 -- # return 0
00:12:14.512   05:57:35 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))
00:12:14.512   05:57:35 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0
00:12:14.512   05:57:35 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:14.512   05:57:35 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:14.512   05:57:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:12:14.512  ************************************
00:12:14.512  START TEST spdk_dd_basic_rw
00:12:14.512  ************************************
00:12:14.512   05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0
00:12:14.512  * Looking for test storage...
00:12:14.512  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:12:14.512     05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-:
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-:
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<'
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0
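The cmp_versions walk traced above splits each version string on '.', '-' and ':' and compares component-wise as decimals (here 1.15 < 2, so the lcov --rc options below get the pre-2.0 spelling). The same logic as a condensed sketch; padding of missing components is simplified relative to scripts/common.sh:

lt() { cmp_versions "$1" '<' "$2"; }
cmp_versions() {
    local IFS=.-:                 # split on dot, dash and colon, as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]             # all components equal
}
lt 1.15 2 && echo "lcov is older than 2.x"   # matches the traced branch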
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:14.772  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.772  		--rc genhtml_branch_coverage=1
00:12:14.772  		--rc genhtml_function_coverage=1
00:12:14.772  		--rc genhtml_legend=1
00:12:14.772  		--rc geninfo_all_blocks=1
00:12:14.772  		--rc geninfo_unexecuted_blocks=1
00:12:14.772  		
00:12:14.772  		'
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:14.772  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.772  		--rc genhtml_branch_coverage=1
00:12:14.772  		--rc genhtml_function_coverage=1
00:12:14.772  		--rc genhtml_legend=1
00:12:14.772  		--rc geninfo_all_blocks=1
00:12:14.772  		--rc geninfo_unexecuted_blocks=1
00:12:14.772  		
00:12:14.772  		'
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:14.772  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.772  		--rc genhtml_branch_coverage=1
00:12:14.772  		--rc genhtml_function_coverage=1
00:12:14.772  		--rc genhtml_legend=1
00:12:14.772  		--rc geninfo_all_blocks=1
00:12:14.772  		--rc geninfo_unexecuted_blocks=1
00:12:14.772  		
00:12:14.772  		'
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:14.772  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:14.772  		--rc genhtml_branch_coverage=1
00:12:14.772  		--rc genhtml_function_coverage=1
00:12:14.772  		--rc genhtml_legend=1
00:12:14.772  		--rc geninfo_all_blocks=1
00:12:14.772  		--rc geninfo_unexecuted_blocks=1
00:12:14.772  		
00:12:14.772  		'
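For reference, the LCOV and LCOV_OPTS values exported above are consumed later when coverage data is harvested. A hedged usage illustration only; the output file names here are placeholders, not taken from this run:

# $LCOV already embeds the --rc switches exported above, so a capture is just:
$LCOV --capture --directory /home/vagrant/spdk_repo/spdk --output-file coverage.info
# the genhtml_* rc settings take effect when the report is rendered:
genhtml --output-directory cov_html coverage.info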
00:12:14.772    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # export PATH
00:12:14.772      05:57:35 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
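The ballooning PATH above is the visible effect of paths/export.sh prepending its directories unconditionally each time it is sourced, once per nesting level of the test scripts. Purely for contrast, a guarded prepend that would keep the list duplicate-free; this is illustrative, not the repo's code:

path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;          # already on PATH, leave it alone
        *) PATH=$1:$PATH ;;
    esac
}
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/protoc/21.7/bin
export PATH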
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@")
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie')
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:14.772   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
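The method_bdev_nvme_attach_controller_0 associative array declared above is what gen_conf later renders into the JSON handed to spdk_dd over /dev/fd (the {"subsystems": ...} blocks visible further down). A much-simplified sketch of that rendering, hardcoded to this one method; the real gen_conf handles arbitrary method_* variables:

declare -A method_bdev_nvme_attach_controller_0=(
    ['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie'
)
gen_conf_sketch() {
    local -n p=method_bdev_nvme_attach_controller_0   # bash 4.3+ nameref
    cat <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "trtype": "${p[trtype]}",
            "traddr": "${p[traddr]}",
            "name": "${p[name]}"
          },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
JSON
}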
00:12:14.772    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0
00:12:14.772    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id
00:12:14.772    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id
00:12:14.772     05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0'
00:12:15.034    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID:                             1b36 Subsystem Vendor ID:                   1af4 Serial Number:                         12340 Model Number:                          QEMU NVMe Ctrl Firmware Version:                      8.0.0 Recommended Arb Burst:                 6 IEEE OUI Identifier:                   00 54 52 Multi-path I/O   May have multiple subsystem ports:   No   May have multiple controllers:       No   Associated with SR-IOV VF:           No Max Data Transfer Size:                524288 Max Number of Namespaces:              256 Max Number of I/O Queues:              64 NVMe Specification Version (VS):       1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries:                 2048 Contiguous Queues Required:            Yes Arbitration Mechanisms Supported   Weighted Round Robin:                Not Supported   Vendor Specific:                     Not Supported Reset Timeout:                         7500 ms Doorbell Stride:                       4 bytes NVM Subsystem Reset:                   Not Supported Command Sets Supported   NVM Command Set:                     Supported Boot Partition:                        Not Supported Memory Page Size Minimum:              4096 bytes Memory Page Size Maximum:              65536 bytes Persistent Memory Region:              Not Supported Optional Asynchronous Events Supported   Namespace Attribute Notices:         Supported   Firmware Activation Notices:         Not Supported   ANA Change Notices:                  Not Supported   PLE Aggregate Log Change Notices:    Not Supported   LBA Status Info Alert Notices:       Not Supported   EGE Aggregate Log Change Notices:    Not Supported   Normal NVM Subsystem Shutdown event: Not Supported   Zone Descriptor Change Notices:      Not Supported   Discovery Log Change Notices:        Not Supported Controller Attributes   128-bit Host Identifier:             Not Supported   Non-Operational Permissive Mode:     Not Supported   NVM Sets:                            Not Supported   Read Recovery Levels:                Not Supported   Endurance Groups:                    Not Supported   Predictable Latency Mode:            Not Supported   Traffic Based Keep ALive:            Not Supported   Namespace Granularity:               Not Supported   SQ Associations:                     Not Supported   UUID List:                           Not Supported   Multi-Domain Subsystem:              Not Supported   Fixed Capacity Management:           Not Supported   Variable Capacity Management:        Not Supported   Delete Endurance Group:              Not Supported   Delete NVM Set:                      Not Supported   Extended LBA Formats Supported:      Supported   Flexible Data Placement Supported:   Not Supported  Controller Memory Buffer Support ================================ Supported:                             No  Persistent Memory Region Support ================================ Supported:                             No  Admin Command Set Attributes ============================ Security Send/Receive:                 Not Supported Format NVM:                            Supported Firmware Activate/Download:            Not Supported Namespace Management:                  Supported Device 
Self-Test:                      Not Supported Directives:                            Supported NVMe-MI:                               Not Supported Virtualization Management:             Not Supported Doorbell Buffer Config:                Supported Get LBA Status Capability:             Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit:                   4 Async Event Request Limit:             4 Number of Firmware Slots:              N/A Firmware Slot 1 Read-Only:             N/A Firmware Activation Without Reset:     N/A Multiple Update Detection Support:     N/A Firmware Update Granularity:           No Information Provided Per-Namespace SMART Log:               Yes Asymmetric Namespace Access Log Page:  Not Supported Subsystem NQN:                         nqn.2019-08.org.qemu:12340 Command Effects Log Page:              Supported Get Log Page Extended Data:            Supported Telemetry Log Pages:                   Not Supported Persistent Event Log Pages:            Not Supported Supported Log Pages Log Page:          May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page:   May Support Data Area 4 for Telemetry Log:         Not Supported Error Log Page Entries Supported:      1 Keep Alive:                            Not Supported  NVM Command Set Attributes ========================== Submission Queue Entry Size   Max:                       64   Min:                       64 Completion Queue Entry Size   Max:                       16   Min:                       16 Number of Namespaces:        256 Compare Command:             Supported Write Uncorrectable Command: Not Supported Dataset Management Command:  Supported Write Zeroes Command:        Supported Set Features Save Field:     Supported Reservations:                Not Supported Timestamp:                   Supported Copy:                        Supported Volatile Write Cache:        Present Atomic Write Unit (Normal):  1 Atomic Write Unit (PFail):   1 Atomic Compare & Write Unit: 1 Fused Compare & Write:       Not Supported Scatter-Gather List   SGL Command Set:           Supported   SGL Keyed:                 Not Supported   SGL Bit Bucket Descriptor: Not Supported   SGL Metadata Pointer:      Not Supported   Oversized SGL:             Not Supported   SGL Metadata Address:      Not Supported   SGL Offset:                Not Supported   Transport SGL Data Block:  Not Supported Replay Protected Memory Block:  Not Supported  Firmware Slot Information ========================= Active slot:                 1 Slot 1 Firmware Revision:    1.0   Commands Supported and Effects ============================== Admin Commands --------------    Delete I/O Submission Queue (00h): Supported     Create I/O Submission Queue (01h): Supported                    Get Log Page (02h): Supported     Delete I/O Completion Queue (04h): Supported     Create I/O Completion Queue (05h): Supported                        Identify (06h): Supported                           Abort (08h): Supported                    Set Features (09h): Supported                    Get Features (0Ah): Supported      Asynchronous Event Request (0Ch): Supported            Namespace Attachment (15h): Supported NS-Inventory-Change                  Directive Send (19h): Supported               Directive Receive (1Ah): Supported       Virtualization Management (1Ch): Supported          Doorbell Buffer Config (7Ch): Supported                      
Format NVM (80h): Supported LBA-Change  I/O Commands ------------                          Flush (00h): Supported LBA-Change                           Write (01h): Supported LBA-Change                            Read (02h): Supported                         Compare (05h): Supported                    Write Zeroes (08h): Supported LBA-Change              Dataset Management (09h): Supported LBA-Change                         Unknown (0Ch): Supported                         Unknown (12h): Supported                            Copy (19h): Supported LBA-Change                         Unknown (1Dh): Supported LBA-Change   Error Log =========  Arbitration =========== Arbitration Burst:           no limit  Power Management ================ Number of Power States:          1 Current Power State:             Power State #0 Power State #0:   Max Power:                     25.00 W   Non-Operational State:         Operational   Entry Latency:                 16 microseconds   Exit Latency:                  4 microseconds   Relative Read Throughput:      0   Relative Read Latency:         0   Relative Write Throughput:     0   Relative Write Latency:        0   Idle Power:                     Not Reported   Active Power:                   Not Reported Non-Operational Permissive Mode: Not Supported  Health Information ================== Critical Warnings:   Available Spare Space:     OK   Temperature:               OK   Device Reliability:        OK   Read Only:                 No   Volatile Memory Backup:    OK Current Temperature:         323 Kelvin (50 Celsius) Temperature Threshold:       343 Kelvin (70 Celsius) Available Spare:             0% Available Spare Threshold:   0% Life Percentage Used:        0% Data Units Read:             25 Data Units Written:          3 Host Read Commands:          626 Host Write Commands:         19 Controller Busy Time:        0 minutes Power Cycles:                0 Power On Hours:              0 hours Unsafe Shutdowns:            0 Unrecoverable Media Errors:  0 Lifetime Error Log Entries:  0 Warning Temperature Time:    0 minutes Critical Temperature Time:   0 minutes  Number of Queues ================ Number of I/O Submission Queues:      64 Number of I/O Completion Queues:      64  ZNS Specific Controller Data ============================ Zone Append Size Limit:      0   Active Namespaces ================= Namespace ID:1 Error Recovery Timeout:                Unlimited Command Set Identifier:                NVM (00h) Deallocate:                            Supported Deallocated/Unwritten Error:           Supported Deallocated Read Value:                All 0x00 Deallocate in Write Zeroes:            Not Supported Deallocated Guard Field:               0xFFFF Flush:                                 Supported Reservation:                           Not Supported Namespace Sharing Capabilities:        Private Size (in LBAs):                        1310720 (5GiB) Capacity (in LBAs):                    1310720 (5GiB) Utilization (in LBAs):                 1310720 (5GiB) Thin Provisioning:                     Not Supported Per-NS Atomic Units:                   No Maximum Single Source Range Length:    128 Maximum Copy Length:                   128 Maximum Source Range Count:            128 NGUID/EUI64 Never Reused:              No Namespace Write Protected:             No Number of LBA Formats:                 8 Current LBA Format:                    LBA Format #04 LBA Format #00: Data Size:   512  Metadata Size:     0 LBA Format #01: Data Size:   512  Metadata Size:     
8 LBA Format #02: Data Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask:               0 Protection Information Capabilities:   16b Guard Protection Information Storage Tag Support:  No   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0   Storage Tag Check Read Support:                        No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:12:15.034    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID:                             1b36 Subsystem Vendor ID:                   1af4 Serial Number:                         12340 Model Number:                          QEMU NVMe Ctrl Firmware Version:                      8.0.0 Recommended Arb Burst:                 6 IEEE OUI Identifier:                   00 54 52 Multi-path I/O   May have multiple subsystem ports:   No   May have multiple controllers:       No   Associated with SR-IOV VF:           No Max Data Transfer Size:                524288 Max Number of Namespaces:              256 Max Number of I/O Queues:              64 NVMe Specification Version (VS):       1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries:                 2048 Contiguous Queues Required:            Yes Arbitration Mechanisms Supported   Weighted Round Robin:                Not Supported   Vendor Specific:                     Not Supported Reset Timeout:                         7500 ms Doorbell Stride:                       4 bytes NVM Subsystem Reset:                   Not Supported Command Sets Supported   NVM Command Set:                     Supported Boot Partition:                        Not Supported Memory Page Size Minimum:              4096 bytes Memory Page Size Maximum:              65536 bytes Persistent Memory Region:              Not Supported Optional Asynchronous Events Supported   Namespace Attribute Notices:         Supported   Firmware Activation Notices:         Not Supported   ANA Change Notices:                  Not Supported   PLE Aggregate Log Change Notices:    Not Supported   LBA Status Info Alert Notices:       Not Supported   EGE Aggregate Log Change Notices:    Not Supported   Normal NVM Subsystem Shutdown event: Not Supported   Zone Descriptor Change Notices:      Not Supported   Discovery Log Change Notices:        Not Supported Controller Attributes   128-bit Host Identifier:             Not Supported   Non-Operational Permissive Mode:     Not Supported   NVM Sets:                            Not Supported   Read Recovery Levels:                Not Supported   Endurance Groups:                    Not Supported   Predictable Latency Mode:            Not Supported   Traffic Based Keep ALive:            Not Supported   Namespace Granularity:               Not Supported   SQ Associations:                     Not Supported   UUID List:                           Not Supported   Multi-Domain Subsystem:              Not Supported   Fixed Capacity Management:           Not Supported   Variable Capacity Management:        Not Supported   Delete Endurance Group:              Not Supported   Delete NVM Set:                      Not Supported   Extended LBA Formats Supported:      Supported   Flexible Data Placement Supported:   Not Supported  Controller Memory Buffer Support ================================ Supported:                             No  Persistent Memory Region Support ================================ Supported:                             No  Admin Command Set Attributes ============================ Security Send/Receive:                 Not Supported Format NVM:                            Supported Firmware Activate/Download:            Not Supported Namespace Management:                  Supported Device 
Self-Test:                      Not Supported Directives:                            Supported NVMe-MI:                               Not Supported Virtualization Management:             Not Supported Doorbell Buffer Config:                Supported Get LBA Status Capability:             Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit:                   4 Async Event Request Limit:             4 Number of Firmware Slots:              N/A Firmware Slot 1 Read-Only:             N/A Firmware Activation Without Reset:     N/A Multiple Update Detection Support:     N/A Firmware Update Granularity:           No Information Provided Per-Namespace SMART Log:               Yes Asymmetric Namespace Access Log Page:  Not Supported Subsystem NQN:                         nqn.2019-08.org.qemu:12340 Command Effects Log Page:              Supported Get Log Page Extended Data:            Supported Telemetry Log Pages:                   Not Supported Persistent Event Log Pages:            Not Supported Supported Log Pages Log Page:          May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page:   May Support Data Area 4 for Telemetry Log:         Not Supported Error Log Page Entries Supported:      1 Keep Alive:                            Not Supported  NVM Command Set Attributes ========================== Submission Queue Entry Size   Max:                       64   Min:                       64 Completion Queue Entry Size   Max:                       16   Min:                       16 Number of Namespaces:        256 Compare Command:             Supported Write Uncorrectable Command: Not Supported Dataset Management Command:  Supported Write Zeroes Command:        Supported Set Features Save Field:     Supported Reservations:                Not Supported Timestamp:                   Supported Copy:                        Supported Volatile Write Cache:        Present Atomic Write Unit (Normal):  1 Atomic Write Unit (PFail):   1 Atomic Compare & Write Unit: 1 Fused Compare & Write:       Not Supported Scatter-Gather List   SGL Command Set:           Supported   SGL Keyed:                 Not Supported   SGL Bit Bucket Descriptor: Not Supported   SGL Metadata Pointer:      Not Supported   Oversized SGL:             Not Supported   SGL Metadata Address:      Not Supported   SGL Offset:                Not Supported   Transport SGL Data Block:  Not Supported Replay Protected Memory Block:  Not Supported  Firmware Slot Information ========================= Active slot:                 1 Slot 1 Firmware Revision:    1.0   Commands Supported and Effects ============================== Admin Commands --------------    Delete I/O Submission Queue (00h): Supported     Create I/O Submission Queue (01h): Supported                    Get Log Page (02h): Supported     Delete I/O Completion Queue (04h): Supported     Create I/O Completion Queue (05h): Supported                        Identify (06h): Supported                           Abort (08h): Supported                    Set Features (09h): Supported                    Get Features (0Ah): Supported      Asynchronous Event Request (0Ch): Supported            Namespace Attachment (15h): Supported NS-Inventory-Change                  Directive Send (19h): Supported               Directive Receive (1Ah): Supported       Virtualization Management (1Ch): Supported          Doorbell Buffer Config (7Ch): Supported                      
Format NVM (80h): Supported LBA-Change  I/O Commands ------------                          Flush (00h): Supported LBA-Change                           Write (01h): Supported LBA-Change                            Read (02h): Supported                         Compare (05h): Supported                    Write Zeroes (08h): Supported LBA-Change              Dataset Management (09h): Supported LBA-Change                         Unknown (0Ch): Supported                         Unknown (12h): Supported                            Copy (19h): Supported LBA-Change                         Unknown (1Dh): Supported LBA-Change   Error Log =========  Arbitration =========== Arbitration Burst:           no limit  Power Management ================ Number of Power States:          1 Current Power State:             Power State #0 Power State #0:   Max Power:                     25.00 W   Non-Operational State:         Operational   Entry Latency:                 16 microseconds   Exit Latency:                  4 microseconds   Relative Read Throughput:      0   Relative Read Latency:         0   Relative Write Throughput:     0   Relative Write Latency:        0   Idle Power:                     Not Reported   Active Power:                   Not Reported Non-Operational Permissive Mode: Not Supported  Health Information ================== Critical Warnings:   Available Spare Space:     OK   Temperature:               OK   Device Reliability:        OK   Read Only:                 No   Volatile Memory Backup:    OK Current Temperature:         323 Kelvin (50 Celsius) Temperature Threshold:       343 Kelvin (70 Celsius) Available Spare:             0% Available Spare Threshold:   0% Life Percentage Used:        0% Data Units Read:             25 Data Units Written:          3 Host Read Commands:          626 Host Write Commands:         19 Controller Busy Time:        0 minutes Power Cycles:                0 Power On Hours:              0 hours Unsafe Shutdowns:            0 Unrecoverable Media Errors:  0 Lifetime Error Log Entries:  0 Warning Temperature Time:    0 minutes Critical Temperature Time:   0 minutes  Number of Queues ================ Number of I/O Submission Queues:      64 Number of I/O Completion Queues:      64  ZNS Specific Controller Data ============================ Zone Append Size Limit:      0   Active Namespaces ================= Namespace ID:1 Error Recovery Timeout:                Unlimited Command Set Identifier:                NVM (00h) Deallocate:                            Supported Deallocated/Unwritten Error:           Supported Deallocated Read Value:                All 0x00 Deallocate in Write Zeroes:            Not Supported Deallocated Guard Field:               0xFFFF Flush:                                 Supported Reservation:                           Not Supported Namespace Sharing Capabilities:        Private Size (in LBAs):                        1310720 (5GiB) Capacity (in LBAs):                    1310720 (5GiB) Utilization (in LBAs):                 1310720 (5GiB) Thin Provisioning:                     Not Supported Per-NS Atomic Units:                   No Maximum Single Source Range Length:    128 Maximum Copy Length:                   128 Maximum Source Range Count:            128 NGUID/EUI64 Never Reused:              No Namespace Write Protected:             No Number of LBA Formats:                 8 Current LBA Format:                    LBA Format #04 LBA Format #00: Data Size:   512  Metadata Size:     0 LBA Format #01: Data Size:   512  Metadata Size:     
8 LBA Format #02: Data Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask:               0 Protection Information Capabilities:   16b Guard Protection Information Storage Tag Support:  No   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0   Storage Tag Check Read Support:                        No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
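get_native_nvme_bs, whose trace (including the full identify dump it matches against) appears above, boils down to two regex extractions over the spdk_nvme_identify output: first the current LBA format index (#04 here), then that format's data size (4096). A sketch with the regexes copied from the trace; error handling and the mapfile plumbing are simplified:

get_native_nvme_bs_sketch() {
    local pci=$1 id lbaf re
    id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
             -r "trtype:pcie traddr:$pci")
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id =~ $re ]] || return 1
    lbaf=${BASH_REMATCH[1]}                       # "04" on this QEMU controller
    re="LBA Format #$lbaf: Data Size: *([0-9]+)"
    [[ $id =~ $re ]] || return 1
    echo "${BASH_REMATCH[1]}"                     # 4096
}
native_bs=$(get_native_nvme_bs_sketch 0000:00:10.0)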
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:15.035  ************************************
00:12:15.035  START TEST dd_bs_lt_native_bs
00:12:15.035  ************************************
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:15.035  {
00:12:15.035    "subsystems": [
00:12:15.035      {
00:12:15.035        "subsystem": "bdev",
00:12:15.035        "config": [
00:12:15.035          {
00:12:15.035            "params": {
00:12:15.035              "trtype": "pcie",
00:12:15.035              "traddr": "0000:00:10.0",
00:12:15.035              "name": "Nvme0"
00:12:15.035            },
00:12:15.035            "method": "bdev_nvme_attach_controller"
00:12:15.035          },
00:12:15.035          {
00:12:15.035            "method": "bdev_wait_for_examine"
00:12:15.035          }
00:12:15.035        ]
00:12:15.035      }
00:12:15.035    ]
00:12:15.035  }
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:15.035    05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:15.035   05:57:35 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:15.035  [2024-11-18 05:57:35.938467] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:15.035  [2024-11-18 05:57:35.938688] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84023 ]
00:12:15.295  [2024-11-18 05:57:36.099750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:15.295  [2024-11-18 05:57:36.126067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:15.295  [2024-11-18 05:57:36.260974] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:12:15.295  [2024-11-18 05:57:36.261091] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:15.554  [2024-11-18 05:57:36.351820] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 ))
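The es bookkeeping above is the tail of the NOT helper: dd_bs_lt_native_bs passes only because spdk_dd refused the undersized --bs and exited non-zero (234, masked to 106, normalized to 1, then inverted). A reconstruction of that inversion from the traced values; the exact normalization table in autotest_common.sh may be richer than this:

NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && es=$(( es & ~128 ))   # 234 -> 106, as in the trace
    case "$es" in
        0) ;;                               # command unexpectedly succeeded
        *) es=1 ;;                          # any failure collapses to 1
    esac
    (( !es == 0 ))                          # success iff the wrapped command failed
}
NOT false && echo "failure was expected, so NOT reports success"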
00:12:15.554  
00:12:15.554  real	0m0.575s
00:12:15.554  user	0m0.319s
00:12:15.554  sys	0m0.175s
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:15.554  ************************************
00:12:15.554  END TEST dd_bs_lt_native_bs
00:12:15.554  ************************************
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:15.554  ************************************
00:12:15.554  START TEST dd_rw
00:12:15.554  ************************************
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64)
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:15.554   05:57:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
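For orientation in the dd_rw passes that follow: the traced loop left-shifts the native block size to build the bs list, and the transfer size is simply count times bs (15 x 4096 = 61440 here; the later 8 KiB pass uses 7 x 8192 = 57344). gen_bytes then presumably fills dd.dump0 with that many random bytes. The same arithmetic as a standalone snippet:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($(( native_bs << bs )))    # 4096 8192 16384
done
count=15
size=$(( count * bss[0] ))           # 15 * 4096 = 61440 bytes per pass
echo "bss=${bss[*]} size=$size"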
00:12:16.122   05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
00:12:16.122    05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:16.122    05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:16.122    05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:16.122  {
00:12:16.122    "subsystems": [
00:12:16.122      {
00:12:16.122        "subsystem": "bdev",
00:12:16.122        "config": [
00:12:16.122          {
00:12:16.122            "params": {
00:12:16.122              "trtype": "pcie",
00:12:16.122              "traddr": "0000:00:10.0",
00:12:16.122              "name": "Nvme0"
00:12:16.122            },
00:12:16.122            "method": "bdev_nvme_attach_controller"
00:12:16.122          },
00:12:16.122          {
00:12:16.122            "method": "bdev_wait_for_examine"
00:12:16.122          }
00:12:16.122        ]
00:12:16.122      }
00:12:16.122    ]
00:12:16.122  }
00:12:16.381  [2024-11-18 05:57:37.137108] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:16.381  [2024-11-18 05:57:37.137315] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84055 ]
00:12:16.381  [2024-11-18 05:57:37.291510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:16.381  [2024-11-18 05:57:37.314837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:16.641  
[2024-11-18T05:57:37.619Z] Copying: 60/60 [kB] (average 19 MBps)
00:12:16.641  
00:12:16.900   05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62
00:12:16.900    05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:16.900    05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:16.900    05:57:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:16.900  {
00:12:16.900    "subsystems": [
00:12:16.900      {
00:12:16.900        "subsystem": "bdev",
00:12:16.900        "config": [
00:12:16.900          {
00:12:16.900            "params": {
00:12:16.900              "trtype": "pcie",
00:12:16.900              "traddr": "0000:00:10.0",
00:12:16.900              "name": "Nvme0"
00:12:16.900            },
00:12:16.900            "method": "bdev_nvme_attach_controller"
00:12:16.900          },
00:12:16.900          {
00:12:16.900            "method": "bdev_wait_for_examine"
00:12:16.900          }
00:12:16.900        ]
00:12:16.900      }
00:12:16.900    ]
00:12:16.900  }
00:12:16.900  [2024-11-18 05:57:37.681010] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:16.900  [2024-11-18 05:57:37.681228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84068 ]
00:12:16.900  [2024-11-18 05:57:37.837502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:16.900  [2024-11-18 05:57:37.858280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:17.158  
[2024-11-18T05:57:38.394Z] Copying: 60/60 [kB] (average 19 MBps)
00:12:17.416  
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:17.416   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:17.416    05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:17.416    05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:17.416    05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:17.416  {
00:12:17.416    "subsystems": [
00:12:17.416      {
00:12:17.416        "subsystem": "bdev",
00:12:17.416        "config": [
00:12:17.416          {
00:12:17.416            "params": {
00:12:17.416              "trtype": "pcie",
00:12:17.416              "traddr": "0000:00:10.0",
00:12:17.416              "name": "Nvme0"
00:12:17.416            },
00:12:17.416            "method": "bdev_nvme_attach_controller"
00:12:17.416          },
00:12:17.416          {
00:12:17.416            "method": "bdev_wait_for_examine"
00:12:17.416          }
00:12:17.416        ]
00:12:17.416      }
00:12:17.416    ]
00:12:17.416  }
00:12:17.416  [2024-11-18 05:57:38.236398] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:17.416  [2024-11-18 05:57:38.236607] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84083 ]
00:12:17.416  [2024-11-18 05:57:38.390934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:17.681  [2024-11-18 05:57:38.412243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:17.681  
[2024-11-18T05:57:38.939Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:12:17.961  
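That completes one full qd=1 iteration, and the qd=64 pass below repeats the same shape: write the random dump file to the bdev, read it back with --count, byte-compare, then scrub. Condensed into one function, with the flags exactly as traced and the JSON config (as in the blocks above) supplied by the caller:

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
run_rw_pass() {
    local bs=$1 qd=$2 count=$3 conf=$4   # conf: path or fd with the bdev JSON
    "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
    "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" \
               --count="$count" --json "$conf"
    diff -q dd.dump0 dd.dump1            # the roundtrip must be byte-identical
}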
00:12:17.961   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:17.961   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15
00:12:17.961   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15
00:12:17.961   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440
00:12:17.961   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:12:17.961   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:17.961   05:57:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:18.534   05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62
00:12:18.534    05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:18.534    05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:18.534    05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:18.534  {
00:12:18.534    "subsystems": [
00:12:18.534      {
00:12:18.534        "subsystem": "bdev",
00:12:18.534        "config": [
00:12:18.534          {
00:12:18.534            "params": {
00:12:18.534              "trtype": "pcie",
00:12:18.534              "traddr": "0000:00:10.0",
00:12:18.534              "name": "Nvme0"
00:12:18.534            },
00:12:18.534            "method": "bdev_nvme_attach_controller"
00:12:18.534          },
00:12:18.534          {
00:12:18.534            "method": "bdev_wait_for_examine"
00:12:18.534          }
00:12:18.534        ]
00:12:18.534      }
00:12:18.534    ]
00:12:18.534  }
00:12:18.534  [2024-11-18 05:57:39.366101] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:18.534  [2024-11-18 05:57:39.366822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84102 ]
00:12:18.793  [2024-11-18 05:57:39.520058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:18.793  [2024-11-18 05:57:39.543350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:18.793  
[2024-11-18T05:57:40.030Z] Copying: 60/60 [kB] (average 58 MBps)
00:12:19.052  
00:12:19.052   05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62
00:12:19.052    05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:19.052    05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:19.052    05:57:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:19.052  {
00:12:19.052    "subsystems": [
00:12:19.052      {
00:12:19.052        "subsystem": "bdev",
00:12:19.052        "config": [
00:12:19.052          {
00:12:19.052            "params": {
00:12:19.052              "trtype": "pcie",
00:12:19.052              "traddr": "0000:00:10.0",
00:12:19.052              "name": "Nvme0"
00:12:19.052            },
00:12:19.052            "method": "bdev_nvme_attach_controller"
00:12:19.052          },
00:12:19.052          {
00:12:19.052            "method": "bdev_wait_for_examine"
00:12:19.052          }
00:12:19.052        ]
00:12:19.052      }
00:12:19.052    ]
00:12:19.052  }
00:12:19.052  [2024-11-18 05:57:39.913807] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:19.052  [2024-11-18 05:57:39.914008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84116 ]
00:12:19.311  [2024-11-18 05:57:40.070402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:19.311  [2024-11-18 05:57:40.090933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:19.311  
[2024-11-18T05:57:40.549Z] Copying: 60/60 [kB] (average 58 MBps)
00:12:19.571  
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:19.571   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:19.571    05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:19.571    05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:19.571    05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:19.571  {
00:12:19.571    "subsystems": [
00:12:19.571      {
00:12:19.571        "subsystem": "bdev",
00:12:19.571        "config": [
00:12:19.571          {
00:12:19.571            "params": {
00:12:19.571              "trtype": "pcie",
00:12:19.571              "traddr": "0000:00:10.0",
00:12:19.571              "name": "Nvme0"
00:12:19.571            },
00:12:19.571            "method": "bdev_nvme_attach_controller"
00:12:19.571          },
00:12:19.571          {
00:12:19.571            "method": "bdev_wait_for_examine"
00:12:19.571          }
00:12:19.571        ]
00:12:19.571      }
00:12:19.571    ]
00:12:19.571  }
00:12:19.571  [2024-11-18 05:57:40.464209] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:19.571  [2024-11-18 05:57:40.464400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84130 ]
00:12:19.832  [2024-11-18 05:57:40.618644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:19.832  [2024-11-18 05:57:40.640452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:19.832  
[2024-11-18T05:57:41.069Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:20.091  
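clear_nvme, traced twice above (dd/common.sh@10-18), scrubs the region the pass dirtied by streaming /dev/zero at 1 MiB blocks; 61440 bytes rounds up to a single block. A sketch with the traced argument order; the round-up formula is inferred from size=61440 -> count=1 and is an assumption:

clear_nvme_sketch() {
    local bdev=$1 nvme_ref=$2 size=$3 conf=$4
    local bs=1048576 count
    count=$(( (size + bs - 1) / bs ))    # 61440 B -> 1 block of 1 MiB, as traced
    "$SPDK_DD" --if=/dev/zero --bs="$bs" --ob="$bdev" \
               --count="$count" --json "$conf"
}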
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:20.091   05:57:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:20.659   05:57:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62
00:12:20.659    05:57:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:20.659    05:57:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:20.659    05:57:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:20.659  {
00:12:20.659    "subsystems": [
00:12:20.659      {
00:12:20.659        "subsystem": "bdev",
00:12:20.659        "config": [
00:12:20.659          {
00:12:20.659            "params": {
00:12:20.659              "trtype": "pcie",
00:12:20.659              "traddr": "0000:00:10.0",
00:12:20.659              "name": "Nvme0"
00:12:20.659            },
00:12:20.659            "method": "bdev_nvme_attach_controller"
00:12:20.659          },
00:12:20.659          {
00:12:20.659            "method": "bdev_wait_for_examine"
00:12:20.659          }
00:12:20.659        ]
00:12:20.659      }
00:12:20.659    ]
00:12:20.659  }
00:12:20.659  [2024-11-18 05:57:41.537815] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:20.659  [2024-11-18 05:57:41.538043] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84149 ]
00:12:20.918  [2024-11-18 05:57:41.694196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:20.919  [2024-11-18 05:57:41.714812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:20.919  
[2024-11-18T05:57:42.156Z] Copying: 56/56 [kB] (average 27 MBps)
00:12:21.178  
00:12:21.178    05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:21.178   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62
00:12:21.178    05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:21.178    05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:21.178  {
00:12:21.178    "subsystems": [
00:12:21.178      {
00:12:21.178        "subsystem": "bdev",
00:12:21.178        "config": [
00:12:21.178          {
00:12:21.178            "params": {
00:12:21.178              "trtype": "pcie",
00:12:21.178              "traddr": "0000:00:10.0",
00:12:21.178              "name": "Nvme0"
00:12:21.178            },
00:12:21.178            "method": "bdev_nvme_attach_controller"
00:12:21.178          },
00:12:21.178          {
00:12:21.178            "method": "bdev_wait_for_examine"
00:12:21.178          }
00:12:21.178        ]
00:12:21.178      }
00:12:21.178    ]
00:12:21.178  }
00:12:21.178  [2024-11-18 05:57:42.088277] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:21.178  [2024-11-18 05:57:42.088487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84168 ]
00:12:21.438  [2024-11-18 05:57:42.241712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:21.438  [2024-11-18 05:57:42.262041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:21.438  
[2024-11-18T05:57:42.675Z] Copying: 56/56 [kB] (average 27 MBps)
00:12:21.698  
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:21.698   05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:21.698    05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:21.698    05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:21.698    05:57:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:21.698  {
00:12:21.698    "subsystems": [
00:12:21.698      {
00:12:21.698        "subsystem": "bdev",
00:12:21.698        "config": [
00:12:21.698          {
00:12:21.698            "params": {
00:12:21.698              "trtype": "pcie",
00:12:21.698              "traddr": "0000:00:10.0",
00:12:21.698              "name": "Nvme0"
00:12:21.698            },
00:12:21.698            "method": "bdev_nvme_attach_controller"
00:12:21.698          },
00:12:21.698          {
00:12:21.698            "method": "bdev_wait_for_examine"
00:12:21.698          }
00:12:21.698        ]
00:12:21.698      }
00:12:21.698    ]
00:12:21.698  }
00:12:21.698  [2024-11-18 05:57:42.637885] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:21.698  [2024-11-18 05:57:42.638102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84177 ]
00:12:21.957  [2024-11-18 05:57:42.790056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:21.957  [2024-11-18 05:57:42.809661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:21.957  
[2024-11-18T05:57:43.194Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:22.216  
00:12:22.216   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:22.216   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7
00:12:22.216   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7
00:12:22.216   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344
00:12:22.216   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:12:22.216   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:22.216   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:22.784   05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62
00:12:22.784    05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:22.784    05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:22.784    05:57:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:22.784  {
00:12:22.784    "subsystems": [
00:12:22.784      {
00:12:22.784        "subsystem": "bdev",
00:12:22.784        "config": [
00:12:22.784          {
00:12:22.784            "params": {
00:12:22.784              "trtype": "pcie",
00:12:22.784              "traddr": "0000:00:10.0",
00:12:22.784              "name": "Nvme0"
00:12:22.784            },
00:12:22.784            "method": "bdev_nvme_attach_controller"
00:12:22.784          },
00:12:22.784          {
00:12:22.784            "method": "bdev_wait_for_examine"
00:12:22.784          }
00:12:22.784        ]
00:12:22.784      }
00:12:22.784    ]
00:12:22.784  }
00:12:22.784  [2024-11-18 05:57:43.726006] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:22.784  [2024-11-18 05:57:43.726210] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84196 ]
00:12:23.043  [2024-11-18 05:57:43.882469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.043  [2024-11-18 05:57:43.903045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.302  
[2024-11-18T05:57:44.280Z] Copying: 56/56 [kB] (average 54 MBps)
00:12:23.302  
00:12:23.302   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62
00:12:23.302    05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:23.302    05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:23.302    05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:23.302  {
00:12:23.302    "subsystems": [
00:12:23.302      {
00:12:23.302        "subsystem": "bdev",
00:12:23.302        "config": [
00:12:23.302          {
00:12:23.302            "params": {
00:12:23.302              "trtype": "pcie",
00:12:23.302              "traddr": "0000:00:10.0",
00:12:23.302              "name": "Nvme0"
00:12:23.302            },
00:12:23.302            "method": "bdev_nvme_attach_controller"
00:12:23.302          },
00:12:23.302          {
00:12:23.302            "method": "bdev_wait_for_examine"
00:12:23.302          }
00:12:23.302        ]
00:12:23.302      }
00:12:23.302    ]
00:12:23.302  }
00:12:23.302  [2024-11-18 05:57:44.275458] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:23.302  [2024-11-18 05:57:44.275650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84215 ]
00:12:23.562  [2024-11-18 05:57:44.428316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.562  [2024-11-18 05:57:44.447663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.821  
[2024-11-18T05:57:44.799Z] Copying: 56/56 [kB] (average 54 MBps)
00:12:23.821  
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:23.821   05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:23.821    05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:23.821    05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:23.821    05:57:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:23.821  {
00:12:23.821    "subsystems": [
00:12:23.821      {
00:12:23.821        "subsystem": "bdev",
00:12:23.821        "config": [
00:12:23.821          {
00:12:23.821            "params": {
00:12:23.821              "trtype": "pcie",
00:12:23.821              "traddr": "0000:00:10.0",
00:12:23.821              "name": "Nvme0"
00:12:23.821            },
00:12:23.821            "method": "bdev_nvme_attach_controller"
00:12:23.821          },
00:12:23.821          {
00:12:23.821            "method": "bdev_wait_for_examine"
00:12:23.821          }
00:12:23.821        ]
00:12:23.821      }
00:12:23.821    ]
00:12:23.821  }
00:12:24.080  [2024-11-18 05:57:44.812899] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:24.080  [2024-11-18 05:57:44.813070] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84224 ]
00:12:24.080  [2024-11-18 05:57:44.966979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:24.080  [2024-11-18 05:57:44.989854] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:24.339  
[2024-11-18T05:57:45.317Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:12:24.339  
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:24.339   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:24.907   05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62
00:12:24.907    05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:24.907    05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:24.907    05:57:45 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:24.907  {
00:12:24.907    "subsystems": [
00:12:24.907      {
00:12:24.907        "subsystem": "bdev",
00:12:24.907        "config": [
00:12:24.907          {
00:12:24.907            "params": {
00:12:24.907              "trtype": "pcie",
00:12:24.907              "traddr": "0000:00:10.0",
00:12:24.907              "name": "Nvme0"
00:12:24.907            },
00:12:24.907            "method": "bdev_nvme_attach_controller"
00:12:24.907          },
00:12:24.907          {
00:12:24.907            "method": "bdev_wait_for_examine"
00:12:24.907          }
00:12:24.907        ]
00:12:24.907      }
00:12:24.907    ]
00:12:24.907  }
00:12:24.907  [2024-11-18 05:57:45.834003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:24.907  [2024-11-18 05:57:45.834208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84243 ]
00:12:25.165  [2024-11-18 05:57:45.989699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:25.165  [2024-11-18 05:57:46.012590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:25.165  
[2024-11-18T05:57:46.402Z] Copying: 48/48 [kB] (average 46 MBps)
00:12:25.424  
00:12:25.424   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62
00:12:25.424    05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:25.424    05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:25.424    05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:25.424  {
00:12:25.424    "subsystems": [
00:12:25.424      {
00:12:25.424        "subsystem": "bdev",
00:12:25.424        "config": [
00:12:25.424          {
00:12:25.424            "params": {
00:12:25.424              "trtype": "pcie",
00:12:25.424              "traddr": "0000:00:10.0",
00:12:25.424              "name": "Nvme0"
00:12:25.424            },
00:12:25.424            "method": "bdev_nvme_attach_controller"
00:12:25.424          },
00:12:25.424          {
00:12:25.424            "method": "bdev_wait_for_examine"
00:12:25.424          }
00:12:25.424        ]
00:12:25.424      }
00:12:25.424    ]
00:12:25.424  }
00:12:25.424  [2024-11-18 05:57:46.378177] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:25.424  [2024-11-18 05:57:46.378444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84262 ]
00:12:25.683  [2024-11-18 05:57:46.532886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:25.683  [2024-11-18 05:57:46.552703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:25.941  
[2024-11-18T05:57:46.919Z] Copying: 48/48 [kB] (average 46 MBps)
00:12:25.941  
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:25.941   05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:25.941    05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:25.941    05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:25.941    05:57:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:25.941  {
00:12:25.941    "subsystems": [
00:12:25.941      {
00:12:25.941        "subsystem": "bdev",
00:12:25.941        "config": [
00:12:25.941          {
00:12:25.941            "params": {
00:12:25.941              "trtype": "pcie",
00:12:25.941              "traddr": "0000:00:10.0",
00:12:25.941              "name": "Nvme0"
00:12:25.941            },
00:12:25.941            "method": "bdev_nvme_attach_controller"
00:12:25.941          },
00:12:25.941          {
00:12:25.941            "method": "bdev_wait_for_examine"
00:12:25.941          }
00:12:25.941        ]
00:12:25.941      }
00:12:25.941    ]
00:12:25.941  }
00:12:26.200  [2024-11-18 05:57:46.922477] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:26.200  [2024-11-18 05:57:46.922685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84271 ]
00:12:26.200  [2024-11-18 05:57:47.071940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:26.200  [2024-11-18 05:57:47.092718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:26.459  
[2024-11-18T05:57:47.437Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:26.459  
00:12:26.459   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:26.459   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3
00:12:26.459   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3
00:12:26.459   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152
00:12:26.459   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:12:26.459   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:26.459   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:27.027   05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
00:12:27.027    05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:27.027    05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:27.027    05:57:47 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:27.027  {
00:12:27.027    "subsystems": [
00:12:27.027      {
00:12:27.027        "subsystem": "bdev",
00:12:27.027        "config": [
00:12:27.027          {
00:12:27.027            "params": {
00:12:27.027              "trtype": "pcie",
00:12:27.027              "traddr": "0000:00:10.0",
00:12:27.027              "name": "Nvme0"
00:12:27.027            },
00:12:27.027            "method": "bdev_nvme_attach_controller"
00:12:27.027          },
00:12:27.027          {
00:12:27.027            "method": "bdev_wait_for_examine"
00:12:27.027          }
00:12:27.027        ]
00:12:27.027      }
00:12:27.027    ]
00:12:27.027  }
00:12:27.027  [2024-11-18 05:57:47.960600] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:27.027  [2024-11-18 05:57:47.960835] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84290 ]
00:12:27.286  [2024-11-18 05:57:48.113820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:27.286  [2024-11-18 05:57:48.135141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:27.286  
[2024-11-18T05:57:48.523Z] Copying: 48/48 [kB] (average 46 MBps)
00:12:27.545  
00:12:27.545   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
00:12:27.545    05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:27.545    05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:27.545    05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:27.545  {
00:12:27.545    "subsystems": [
00:12:27.545      {
00:12:27.545        "subsystem": "bdev",
00:12:27.545        "config": [
00:12:27.545          {
00:12:27.545            "params": {
00:12:27.545              "trtype": "pcie",
00:12:27.545              "traddr": "0000:00:10.0",
00:12:27.545              "name": "Nvme0"
00:12:27.545            },
00:12:27.545            "method": "bdev_nvme_attach_controller"
00:12:27.545          },
00:12:27.545          {
00:12:27.545            "method": "bdev_wait_for_examine"
00:12:27.545          }
00:12:27.545        ]
00:12:27.545      }
00:12:27.545    ]
00:12:27.545  }
00:12:27.545  [2024-11-18 05:57:48.509882] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:27.545  [2024-11-18 05:57:48.510074] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84308 ]
00:12:27.804  [2024-11-18 05:57:48.665420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:27.804  [2024-11-18 05:57:48.688008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:28.063  
[2024-11-18T05:57:49.041Z] Copying: 48/48 [kB] (average 46 MBps)
00:12:28.063  
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:28.063   05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:28.063    05:57:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:28.063    05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:28.063    05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:28.063  {
00:12:28.063    "subsystems": [
00:12:28.063      {
00:12:28.063        "subsystem": "bdev",
00:12:28.063        "config": [
00:12:28.063          {
00:12:28.063            "params": {
00:12:28.063              "trtype": "pcie",
00:12:28.063              "traddr": "0000:00:10.0",
00:12:28.063              "name": "Nvme0"
00:12:28.063            },
00:12:28.063            "method": "bdev_nvme_attach_controller"
00:12:28.063          },
00:12:28.063          {
00:12:28.063            "method": "bdev_wait_for_examine"
00:12:28.063          }
00:12:28.063        ]
00:12:28.063      }
00:12:28.063    ]
00:12:28.063  }
00:12:28.322  [2024-11-18 05:57:49.060966] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:28.322  [2024-11-18 05:57:49.061182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84318 ]
00:12:28.322  [2024-11-18 05:57:49.217944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:28.322  [2024-11-18 05:57:49.238327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:28.579  
[2024-11-18T05:57:49.557Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:28.579  
00:12:28.837  
00:12:28.838  real	0m13.070s
00:12:28.838  user	0m8.286s
00:12:28.838  sys	0m3.039s
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:28.838  ************************************
00:12:28.838  END TEST dd_rw
00:12:28.838  ************************************
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:28.838  ************************************
00:12:28.838  START TEST dd_rw_offset
00:12:28.838  ************************************
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 ))
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # data=fht9b1xlophrdpp5hg9bcuaeqbou01067qrgc7vy3g7f1ggd5thix2mtkcwckgses3rf42xz3t1pvuk8dgezhg3t1kfw6zr5q6rwurmu8gxvr1o5plmeh7in4r4jvnhogpacwz6i5k697ybd9a67ez7qq3a3s3vxz7zp7sz3wrjeft1bjak2w51nq0ep2v7a5ybiplqbhy78g65fa4515bufuq2h2d1s2f19ok2az0km8hum5qxmk31kmz5ciia0h37tiq4p61aohbur6m6meaoyx93zfuwcn9tdy9hdshwn25qs19vktjt6zqu8bz917mdzn3cnpf4848u8pd4j8e5evowngqxkrn1pipzx39i9wwrdyc7hzjp3vgwzbeds7w93rshdlrfhh4um6y5dtc8v0jefu7nys0gytxrxx2gx8nzezf69l9q97qx8pwihutpktkx70cznx09uu3k7adl95lj0pvym0ct9augrckw85vgekrl2rgny8caqy0w2hez88uycppxvnifeapjeft5lbjmg92ucch1ecjgcgyup658iy71zs04jz9d77hbf6sc6oe6xu9h908v7g75wtatw4uojlkywqfcslsngtake966eca1ffymkk946lwyxx86fdv6p8lsi37eec13fqz6fl930issf30br4e6yg9m8ybv5ah5ahiengz3g1i1hry5z3pjb830yw6ylcf4h3obp085dland5fvww39y5rl8q56biqqyj4aon0qihoynpc6ibj0ulcbbfhqqd5hipnkau80zri9wxbvzmo2c9r4wlqf56qpm4c826ea91j06mfy9x0p11qmqjfjovaqe7u1cy1a8ranumschl5l2p5zb69v94oups1ga7r4tt6cirw2lxxmepoofsasuyna06aq21xv1letrs1t8gsakc0r1ecsrlfeavkjbi606q12nbqbsitexiltgxa3d116qqwpous9qovehrcx2kmjfhx41izbg0y15q856p517jwe8vtfkftcdg2qoyxw1tpqw3559c5v7w0xd4aebh7kxkn8fgtsxpxsae2akak8fi2qmdbbgmrv0zpgp9t4f27no3kthxods7uuzwf3sqrd1iyrj4noz7s4a7v6qzjehkojxpandcdjapnt3e7qtqoh189nihpcy26ifr0x9nxhh5qd6yq1wvw5lcc0b2h5usbi8m4mmv5qg5gv149ca4y4nsk8eao4ko3729jhbxldxk2042g0v861piuujeaankw68qqrykcq07e53gfiys7eaci9zylaf4bbiyegqq803mh7kfkyu2wgiic210xcujnsqu2jzf1aa4pzwgiq02fk146i55foww94lvq32m4nueef7odegng3zw7c7kiej41kkhyd62hk3cl9ggmvsgv6v7l7np8egphn98vw96j8y0mf10vl5fhl1mr8t9vt7dh9fthi50ognyox2et1wbkxmc5xq9yom2uvkqa5dbxpdq0hce6unajxgqqmn9rv3apty6pputnt2u4o8wxiqs8tgwb0fs0vvhzembyzgs5zflqis12lku8p9obu5s8hlvdeu3aefsdrdso0s8gpfg9id5petf56f79gfiqdk7yivihsg1swrjlom9dn1k95ri7hz4x6x41ccog90opn2fbxkhoswy5tf3wd2apgpqcwrzb3f0hwoe7w5jmsesm6ldtcr5jsuhcsngkqs8c7xexjzvbipvxsd8sm1k6n0b6yf5gtyz0x2i75sd3c40a1fcg2st2kocad6zu1dzef71ykrkorpvpfcs2q72hqikl28jebdf5a7wy55b7m33e58bjmlq1mj1cm3c30b15ofb45di0fnelznzaiegk2gead20kivdy4sq6bgbf75vme8w5fvvuiuueb48ni3tg1s7tij9m7qawe6vvn3g6zhzuu5inr0avrfmum7rc2nktznumpc0fwvgny62t25l1owf53j0lsux2y34qt2vkumq3t8tv5dcooj9qz4urbitblmfzkrvw412rd7dqjucznewaema5dr7rhpkll37o8n44qreooqnjb7gisk6y2y0rdv1holoe41xw5qk7ivx3k1uy9437n9m5cc71v00kd5kx4km6j7hv62jjx3gr95esw44s4553lajb7lcnfea4vut3zpa5cj9xs0r9uw8psqj5yvhl8qjkpz84nm8gstf1nm3mz6opv0w3i3rgjvga36pdbn1bs4qmi3vc7i4vbd6ygwryv7d54exzv80hneeyb9w7gvnu55qh0x6oojqejt7tijen6r4b9uooj2tdgkdhgm87syndtfvqaknn8hqjkmp2l7c2dsx2ye4vz5tcm03r6b4srm64afi0cty72i5up837hvd1y2qnss0er1lgndcnj2br62myvsn4m4jelgj1vr469g89naopnd18r680zz98q5o3u9fs0ybklrl3k6q9drdmgx8dadbil8ntuyaz5oi2uialonbay2t30029jdybjy9ftfj7acm0cxahcjqxb8orbm2eszcz6msqnal5idm3yun4cl5xdt02cxfvl55dcyrj01yh3qxflj7p5v9nk2vt9cdioxz8icszpcbblh9lfuhwma4s6te1pvywqntlq3mi1h2jjhxmgqfhoawr8ofof1aywdauryy770or3jor2stnv015zg5ytk2dpduwtvew75naczgwibqsgahgs3y0478jbadsnkxopq5l3c9w2xwtygsarv57j3q188xuiiaeuyeh4763t2fe0ymye9devunzv1cg62k3ybzdw5hd6htp7z6rw4smqrlztgjo1ql58tegbmise06nzzcsjbb5iwwh3969f2qjl8k6nbmyebf0p845o0avredfxuvgu42agh6bc5qc2leacnflz21qdcl5j82bny7la3z2o4cm1djru9y83h1k4qcu1ir4w9er3uxkvsb3v2ha6fpq4i1d6ty84f1rhjsqkp57v4zivvrwgysk0xtd719il92lly5vlnfelk1g7t9zz27fu9m4hxws17d763gpgrv2mpckt0foqklas737viw02g0iwr0o2az9k55t0jzgr7a3o14c074k13qfk4j3jr132ruuqm3gm5ykeif5taz5y57be81rmu46177uuepidyzc3j6wed0hm98zjrulafmph5kcfqa8m5f9gqlprbfksdvtrmf21b5y46huhtr1va0f3vgg0a5hxwuqrbn7az9izu5i7vcl44ynql5shpzpgdrwxha7in38sqnhu2qfsuneqn7oyvlnz2ac6pt7sgoc98tl0x0iyu1inigbpn9g52wjchtf6dwpy2tx4zkg9d7x4zr90ism6z3qu7jz7asxqj9zgrneiewsq6h52j1dk9e1k5atac730p5eytfj8y3erqq001rj0xhf1kwz7ij3wme1n9tk173begu4z2719g42gp0ojswgdgexu9g4qgx5p52gadhj8c6yil74dx0pjxaxhcl5v61k8awwtujrotmbbcvzmrharxf7wp5gtt0hf73hvtqbgesixbzqbbhfp9kzdmf0gz8xlpupygu7qoxvlbwxvg0joxma489e5oufovk1xkp6kbfmz5z474knj2b8z4x1uz0555wivcq2r4ywrpqlth8ymp44f9kd1k93v4w4bfm9gz0sonv6l365pd1eoz3nvh98i9ggi2jlyh073g7z4iz73f4p1vvdd9bbzfwv9vfgqtu0jdlf4dl4hevjsa7jnj9cj6arkosisuufkpu0ggj5vxnkp99ym2xcsqrucb6q6hv4qx1v4g2z1j2ib3wyy2e2ls39vsm3wk0su9amldlor5o5hchohsz3r5r6tl15wdbc8g4bjacutqaodnkqy65o1o0xorfl2q40m6lpmwvzf75hvkkbxkwcrrish4bhj9msq76s97rfz5mqlyy13e4yadjs7oxxnje3f2pq4pysrs46o7rz54r13bfyf0pf6pmk7p8eu2ttvwy9j5za2gt9egrwod3mpj7652l087j7p5mx9q95wu25a1
00:12:28.838   05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
00:12:28.838    05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf
00:12:28.838    05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable
00:12:28.838    05:57:49 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:12:28.838  {
00:12:28.838    "subsystems": [
00:12:28.838      {
00:12:28.838        "subsystem": "bdev",
00:12:28.838        "config": [
00:12:28.838          {
00:12:28.838            "params": {
00:12:28.838              "trtype": "pcie",
00:12:28.838              "traddr": "0000:00:10.0",
00:12:28.838              "name": "Nvme0"
00:12:28.838            },
00:12:28.838            "method": "bdev_nvme_attach_controller"
00:12:28.838          },
00:12:28.838          {
00:12:28.838            "method": "bdev_wait_for_examine"
00:12:28.838          }
00:12:28.838        ]
00:12:28.838      }
00:12:28.838    ]
00:12:28.838  }
00:12:28.838  [2024-11-18 05:57:49.732134] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:28.838  [2024-11-18 05:57:49.732320] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84354 ]
00:12:29.097  [2024-11-18 05:57:49.885690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.097  [2024-11-18 05:57:49.905581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.097  
[2024-11-18T05:57:50.333Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:12:29.355  
00:12:29.355    05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf
00:12:29.355   05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62
00:12:29.355    05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable
00:12:29.355    05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:12:29.355  {
00:12:29.355    "subsystems": [
00:12:29.355      {
00:12:29.355        "subsystem": "bdev",
00:12:29.355        "config": [
00:12:29.355          {
00:12:29.355            "params": {
00:12:29.355              "trtype": "pcie",
00:12:29.355              "traddr": "0000:00:10.0",
00:12:29.355              "name": "Nvme0"
00:12:29.355            },
00:12:29.355            "method": "bdev_nvme_attach_controller"
00:12:29.355          },
00:12:29.355          {
00:12:29.355            "method": "bdev_wait_for_examine"
00:12:29.355          }
00:12:29.355        ]
00:12:29.355      }
00:12:29.355    ]
00:12:29.355  }
00:12:29.355  [2024-11-18 05:57:50.282194] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:29.355  [2024-11-18 05:57:50.282400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84368 ]
00:12:29.614  [2024-11-18 05:57:50.439403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.614  [2024-11-18 05:57:50.461445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.615  
[2024-11-18T05:57:50.853Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:12:29.875  
00:12:29.875   05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:12:29.875   05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ fht9b1xlophrdpp5hg9bcuaeqbou01067qrgc7vy3g7f1ggd5thix2mtkcwckgses3rf42xz3t1pvuk8dgezhg3t1kfw6zr5q6rwurmu8gxvr1o5plmeh7in4r4jvnhogpacwz6i5k697ybd9a67ez7qq3a3s3vxz7zp7sz3wrjeft1bjak2w51nq0ep2v7a5ybiplqbhy78g65fa4515bufuq2h2d1s2f19ok2az0km8hum5qxmk31kmz5ciia0h37tiq4p61aohbur6m6meaoyx93zfuwcn9tdy9hdshwn25qs19vktjt6zqu8bz917mdzn3cnpf4848u8pd4j8e5evowngqxkrn1pipzx39i9wwrdyc7hzjp3vgwzbeds7w93rshdlrfhh4um6y5dtc8v0jefu7nys0gytxrxx2gx8nzezf69l9q97qx8pwihutpktkx70cznx09uu3k7adl95lj0pvym0ct9augrckw85vgekrl2rgny8caqy0w2hez88uycppxvnifeapjeft5lbjmg92ucch1ecjgcgyup658iy71zs04jz9d77hbf6sc6oe6xu9h908v7g75wtatw4uojlkywqfcslsngtake966eca1ffymkk946lwyxx86fdv6p8lsi37eec13fqz6fl930issf30br4e6yg9m8ybv5ah5ahiengz3g1i1hry5z3pjb830yw6ylcf4h3obp085dland5fvww39y5rl8q56biqqyj4aon0qihoynpc6ibj0ulcbbfhqqd5hipnkau80zri9wxbvzmo2c9r4wlqf56qpm4c826ea91j06mfy9x0p11qmqjfjovaqe7u1cy1a8ranumschl5l2p5zb69v94oups1ga7r4tt6cirw2lxxmepoofsasuyna06aq21xv1letrs1t8gsakc0r1ecsrlfeavkjbi606q12nbqbsitexiltgxa3d116qqwpous9qovehrcx2kmjfhx41izbg0y15q856p517jwe8vtfkftcdg2qoyxw1tpqw3559c5v7w0xd4aebh7kxkn8fgtsxpxsae2akak8fi2qmdbbgmrv0zpgp9t4f27no3kthxods7uuzwf3sqrd1iyrj4noz7s4a7v6qzjehkojxpandcdjapnt3e7qtqoh189nihpcy26ifr0x9nxhh5qd6yq1wvw5lcc0b2h5usbi8m4mmv5qg5gv149ca4y4nsk8eao4ko3729jhbxldxk2042g0v861piuujeaankw68qqrykcq07e53gfiys7eaci9zylaf4bbiyegqq803mh7kfkyu2wgiic210xcujnsqu2jzf1aa4pzwgiq02fk146i55foww94lvq32m4nueef7odegng3zw7c7kiej41kkhyd62hk3cl9ggmvsgv6v7l7np8egphn98vw96j8y0mf10vl5fhl1mr8t9vt7dh9fthi50ognyox2et1wbkxmc5xq9yom2uvkqa5dbxpdq0hce6unajxgqqmn9rv3apty6pputnt2u4o8wxiqs8tgwb0fs0vvhzembyzgs5zflqis12lku8p9obu5s8hlvdeu3aefsdrdso0s8gpfg9id5petf56f79gfiqdk7yivihsg1swrjlom9dn1k95ri7hz4x6x41ccog90opn2fbxkhoswy5tf3wd2apgpqcwrzb3f0hwoe7w5jmsesm6ldtcr5jsuhcsngkqs8c7xexjzvbipvxsd8sm1k6n0b6yf5gtyz0x2i75sd3c40a1fcg2st2kocad6zu1dzef71ykrkorpvpfcs2q72hqikl28jebdf5a7wy55b7m33e58bjmlq1mj1cm3c30b15ofb45di0fnelznzaiegk2gead20kivdy4sq6bgbf75vme8w5fvvuiuueb48ni3tg1s7tij9m7qawe6vvn3g6zhzuu5inr0avrfmum7rc2nktznumpc0fwvgny62t25l1owf53j0lsux2y34qt2vkumq3t8tv5dcooj9qz4urbitblmfzkrvw412rd7dqjucznewaema5dr7rhpkll37o8n44qreooqnjb7gisk6y2y0rdv1holoe41xw5qk7ivx3k1uy9437n9m5cc71v00kd5kx4km6j7hv62jjx3gr95esw44s4553lajb7lcnfea4vut3zpa5cj9xs0r9uw8psqj5yvhl8qjkpz84nm8gstf1nm3mz6opv0w3i3rgjvga36pdbn1bs4qmi3vc7i4vbd6ygwryv7d54exzv80hneeyb9w7gvnu55qh0x6oojqejt7tijen6r4b9uooj2tdgkdhgm87syndtfvqaknn8hqjkmp2l7c2dsx2ye4vz5tcm03r6b4srm64afi0cty72i5up837hvd1y2qnss0er1lgndcnj2br62myvsn4m4jelgj1vr469g89naopnd18r680zz98q5o3u9fs0ybklrl3k6q9drdmgx8dadbil8ntuyaz5oi2uialonbay2t30029jdybjy9ftfj7acm0cxahcjqxb8orbm2eszcz6msqnal5idm3yun4cl5xdt02cxfvl55dcyrj01yh3qxflj7p5v9nk2vt9cdioxz8icszpcbblh9lfuhwma4s6te1pvywqntlq3mi1h2jjhxmgqfhoawr8ofof1aywdauryy770or3jor2stnv015zg5ytk2dpduwtvew75naczgwibqsgahgs3y0478jbadsnkxopq5l3c9w2xwtygsarv57j3q188xuiiaeuyeh4763t2fe0ymye9devunzv1cg62k3ybzdw5hd6htp7z6rw4smqrlztgjo1ql58tegbmise06nzzcsjbb5iwwh3969f2qjl8k6nbmyebf0p845o0avredfxuvgu42agh6bc5qc2leacnflz21qdcl5j82bny7la3z2o4cm1djru9y83h1k4qcu1ir4w9er3uxkvsb3v2ha6fpq4i1d6ty84f1rhjsqkp57v4zivvrwgysk0xtd719il92lly5vlnfelk1g7t9zz27fu9m4hxws17d763gpgrv2mpckt0foqklas737viw02g0iwr0o2az9k55t0jzgr7a3o14c074k13qfk4j3jr132ruuqm3gm5ykeif5taz5y57be81rmu46177uuepidyzc3j6wed0hm98zjrulafmph5kcfqa8m5f9gqlprbfksdvtrmf21b5y46huhtr1va0f3vgg0a5hxwuqrbn7az9izu5i7vcl44ynql5shpzpgdrwxha7in38sqnhu2qfsuneqn7oyvlnz2ac6pt7sgoc98tl0x0iyu1inigbpn9g52wjchtf6dwpy2tx4zkg9d7x4zr90ism6z3qu7jz7asxqj9zgrneiewsq6h52j1dk9e1k5atac730p5eytfj8y3erqq001rj0xhf1kwz7ij3wme1n9tk173begu4z2719g42gp0ojswgdgexu9g4qgx5p52gadhj8c6yil74dx0pjxaxhcl5v61k8awwtujrotmbbcvzmrharxf7wp5gtt0hf73hvtqbgesixbzqbbhfp9kzdmf0gz8xlpupygu7qoxvlbwxvg0joxma489e5oufovk1xkp6kbfmz5z474knj2b8z4x1uz0555wivcq2r4ywrpqlth8ymp44f9kd1k93v4w4bfm9gz0sonv6l365pd1eoz3nvh98i9ggi2jlyh073g7z4iz73f4p1vvdd9bbzfwv9vfgqtu0jdlf4dl4hevjsa7jnj9cj6arkosisuufkpu0ggj5vxnkp99ym2xcsqrucb6q6hv4qx1v4g2z1j2ib3wyy2e2ls39vsm3wk0su9amldlor5o5hchohsz3r5r6tl15wdbc8g4bjacutqaodnkqy65o1o0xorfl2q40m6lpmwvzf75hvkkbxkwcrrish4bhj9msq76s97rfz5mqlyy13e4yadjs7oxxnje3f2pq4pysrs46o7rz54r13bfyf0pf6pmk7p8eu2ttvwy9j5za2gt9egrwod3mpj7652l087j7p5mx9q95wu25a1 == \f\h\t\9\b\1\x\l\o\p\h\r\d\p\p\5\h\g\9\b\c\u\a\e\q\b\o\u\0\1\0\6\7\q\r\g\c\7\v\y\3\g\7\f\1\g\g\d\5\t\h\i\x\2\m\t\k\c\w\c\k\g\s\e\s\3\r\f\4\2\x\z\3\t\1\p\v\u\k\8\d\g\e\z\h\g\3\t\1\k\f\w\6\z\r\5\q\6\r\w\u\r\m\u\8\g\x\v\r\1\o\5\p\l\m\e\h\7\i\n\4\r\4\j\v\n\h\o\g\p\a\c\w\z\6\i\5\k\6\9\7\y\b\d\9\a\6\7\e\z\7\q\q\3\a\3\s\3\v\x\z\7\z\p\7\s\z\3\w\r\j\e\f\t\1\b\j\a\k\2\w\5\1\n\q\0\e\p\2\v\7\a\5\y\b\i\p\l\q\b\h\y\7\8\g\6\5\f\a\4\5\1\5\b\u\f\u\q\2\h\2\d\1\s\2\f\1\9\o\k\2\a\z\0\k\m\8\h\u\m\5\q\x\m\k\3\1\k\m\z\5\c\i\i\a\0\h\3\7\t\i\q\4\p\6\1\a\o\h\b\u\r\6\m\6\m\e\a\o\y\x\9\3\z\f\u\w\c\n\9\t\d\y\9\h\d\s\h\w\n\2\5\q\s\1\9\v\k\t\j\t\6\z\q\u\8\b\z\9\1\7\m\d\z\n\3\c\n\p\f\4\8\4\8\u\8\p\d\4\j\8\e\5\e\v\o\w\n\g\q\x\k\r\n\1\p\i\p\z\x\3\9\i\9\w\w\r\d\y\c\7\h\z\j\p\3\v\g\w\z\b\e\d\s\7\w\9\3\r\s\h\d\l\r\f\h\h\4\u\m\6\y\5\d\t\c\8\v\0\j\e\f\u\7\n\y\s\0\g\y\t\x\r\x\x\2\g\x\8\n\z\e\z\f\6\9\l\9\q\9\7\q\x\8\p\w\i\h\u\t\p\k\t\k\x\7\0\c\z\n\x\0\9\u\u\3\k\7\a\d\l\9\5\l\j\0\p\v\y\m\0\c\t\9\a\u\g\r\c\k\w\8\5\v\g\e\k\r\l\2\r\g\n\y\8\c\a\q\y\0\w\2\h\e\z\8\8\u\y\c\p\p\x\v\n\i\f\e\a\p\j\e\f\t\5\l\b\j\m\g\9\2\u\c\c\h\1\e\c\j\g\c\g\y\u\p\6\5\8\i\y\7\1\z\s\0\4\j\z\9\d\7\7\h\b\f\6\s\c\6\o\e\6\x\u\9\h\9\0\8\v\7\g\7\5\w\t\a\t\w\4\u\o\j\l\k\y\w\q\f\c\s\l\s\n\g\t\a\k\e\9\6\6\e\c\a\1\f\f\y\m\k\k\9\4\6\l\w\y\x\x\8\6\f\d\v\6\p\8\l\s\i\3\7\e\e\c\1\3\f\q\z\6\f\l\9\3\0\i\s\s\f\3\0\b\r\4\e\6\y\g\9\m\8\y\b\v\5\a\h\5\a\h\i\e\n\g\z\3\g\1\i\1\h\r\y\5\z\3\p\j\b\8\3\0\y\w\6\y\l\c\f\4\h\3\o\b\p\0\8\5\d\l\a\n\d\5\f\v\w\w\3\9\y\5\r\l\8\q\5\6\b\i\q\q\y\j\4\a\o\n\0\q\i\h\o\y\n\p\c\6\i\b\j\0\u\l\c\b\b\f\h\q\q\d\5\h\i\p\n\k\a\u\8\0\z\r\i\9\w\x\b\v\z\m\o\2\c\9\r\4\w\l\q\f\5\6\q\p\m\4\c\8\2\6\e\a\9\1\j\0\6\m\f\y\9\x\0\p\1\1\q\m\q\j\f\j\o\v\a\q\e\7\u\1\c\y\1\a\8\r\a\n\u\m\s\c\h\l\5\l\2\p\5\z\b\6\9\v\9\4\o\u\p\s\1\g\a\7\r\4\t\t\6\c\i\r\w\2\l\x\x\m\e\p\o\o\f\s\a\s\u\y\n\a\0\6\a\q\2\1\x\v\1\l\e\t\r\s\1\t\8\g\s\a\k\c\0\r\1\e\c\s\r\l\f\e\a\v\k\j\b\i\6\0\6\q\1\2\n\b\q\b\s\i\t\e\x\i\l\t\g\x\a\3\d\1\1\6\q\q\w\p\o\u\s\9\q\o\v\e\h\r\c\x\2\k\m\j\f\h\x\4\1\i\z\b\g\0\y\1\5\q\8\5\6\p\5\1\7\j\w\e\8\v\t\f\k\f\t\c\d\g\2\q\o\y\x\w\1\t\p\q\w\3\5\5\9\c\5\v\7\w\0\x\d\4\a\e\b\h\7\k\x\k\n\8\f\g\t\s\x\p\x\s\a\e\2\a\k\a\k\8\f\i\2\q\m\d\b\b\g\m\r\v\0\z\p\g\p\9\t\4\f\2\7\n\o\3\k\t\h\x\o\d\s\7\u\u\z\w\f\3\s\q\r\d\1\i\y\r\j\4\n\o\z\7\s\4\a\7\v\6\q\z\j\e\h\k\o\j\x\p\a\n\d\c\d\j\a\p\n\t\3\e\7\q\t\q\o\h\1\8\9\n\i\h\p\c\y\2\6\i\f\r\0\x\9\n\x\h\h\5\q\d\6\y\q\1\w\v\w\5\l\c\c\0\b\2\h\5\u\s\b\i\8\m\4\m\m\v\5\q\g\5\g\v\1\4\9\c\a\4\y\4\n\s\k\8\e\a\o\4\k\o\3\7\2\9\j\h\b\x\l\d\x\k\2\0\4\2\g\0\v\8\6\1\p\i\u\u\j\e\a\a\n\k\w\6\8\q\q\r\y\k\c\q\0\7\e\5\3\g\f\i\y\s\7\e\a\c\i\9\z\y\l\a\f\4\b\b\i\y\e\g\q\q\8\0\3\m\h\7\k\f\k\y\u\2\w\g\i\i\c\2\1\0\x\c\u\j\n\s\q\u\2\j\z\f\1\a\a\4\p\z\w\g\i\q\0\2\f\k\1\4\6\i\5\5\f\o\w\w\9\4\l\v\q\3\2\m\4\n\u\e\e\f\7\o\d\e\g\n\g\3\z\w\7\c\7\k\i\e\j\4\1\k\k\h\y\d\6\2\h\k\3\c\l\9\g\g\m\v\s\g\v\6\v\7\l\7\n\p\8\e\g\p\h\n\9\8\v\w\9\6\j\8\y\0\m\f\1\0\v\l\5\f\h\l\1\m\r\8\t\9\v\t\7\d\h\9\f\t\h\i\5\0\o\g\n\y\o\x\2\e\t\1\w\b\k\x\m\c\5\x\q\9\y\o\m\2\u\v\k\q\a\5\d\b\x\p\d\q\0\h\c\e\6\u\n\a\j\x\g\q\q\m\n\9\r\v\3\a\p\t\y\6\p\p\u\t\n\t\2\u\4\o\8\w\x\i\q\s\8\t\g\w\b\0\f\s\0\v\v\h\z\e\m\b\y\z\g\s\5\z\f\l\q\i\s\1\2\l\k\u\8\p\9\o\b\u\5\s\8\h\l\v\d\e\u\3\a\e\f\s\d\r\d\s\o\0\s\8\g\p\f\g\9\i\d\5\p\e\t\f\5\6\f\7\9\g\f\i\q\d\k\7\y\i\v\i\h\s\g\1\s\w\r\j\l\o\m\9\d\n\1\k\9\5\r\i\7\h\z\4\x\6\x\4\1\c\c\o\g\9\0\o\p\n\2\f\b\x\k\h\o\s\w\y\5\t\f\3\w\d\2\a\p\g\p\q\c\w\r\z\b\3\f\0\h\w\o\e\7\w\5\j\m\s\e\s\m\6\l\d\t\c\r\5\j\s\u\h\c\s\n\g\k\q\s\8\c\7\x\e\x\j\z\v\b\i\p\v\x\s\d\8\s\m\1\k\6\n\0\b\6\y\f\5\g\t\y\z\0\x\2\i\7\5\s\d\3\c\4\0\a\1\f\c\g\2\s\t\2\k\o\c\a\d\6\z\u\1\d\z\e\f\7\1\y\k\r\k\o\r\p\v\p\f\c\s\2\q\7\2\h\q\i\k\l\2\8\j\e\b\d\f\5\a\7\w\y\5\5\b\7\m\3\3\e\5\8\b\j\m\l\q\1\m\j\1\c\m\3\c\3\0\b\1\5\o\f\b\4\5\d\i\0\f\n\e\l\z\n\z\a\i\e\g\k\2\g\e\a\d\2\0\k\i\v\d\y\4\s\q\6\b\g\b\f\7\5\v\m\e\8\w\5\f\v\v\u\i\u\u\e\b\4\8\n\i\3\t\g\1\s\7\t\i\j\9\m\7\q\a\w\e\6\v\v\n\3\g\6\z\h\z\u\u\5\i\n\r\0\a\v\r\f\m\u\m\7\r\c\2\n\k\t\z\n\u\m\p\c\0\f\w\v\g\n\y\6\2\t\2\5\l\1\o\w\f\5\3\j\0\l\s\u\x\2\y\3\4\q\t\2\v\k\u\m\q\3\t\8\t\v\5\d\c\o\o\j\9\q\z\4\u\r\b\i\t\b\l\m\f\z\k\r\v\w\4\1\2\r\d\7\d\q\j\u\c\z\n\e\w\a\e\m\a\5\d\r\7\r\h\p\k\l\l\3\7\o\8\n\4\4\q\r\e\o\o\q\n\j\b\7\g\i\s\k\6\y\2\y\0\r\d\v\1\h\o\l\o\e\4\1\x\w\5\q\k\7\i\v\x\3\k\1\u\y\9\4\3\7\n\9\m\5\c\c\7\1\v\0\0\k\d\5\k\x\4\k\m\6\j\7\h\v\6\2\j\j\x\3\g\r\9\5\e\s\w\4\4\s\4\5\5\3\l\a\j\b\7\l\c\n\f\e\a\4\v\u\t\3\z\p\a\5\c\j\9\x\s\0\r\9\u\w\8\p\s\q\j\5\y\v\h\l\8\q\j\k\p\z\8\4\n\m\8\g\s\t\f\1\n\m\3\m\z\6\o\p\v\0\w\3\i\3\r\g\j\v\g\a\3\6\p\d\b\n\1\b\s\4\q\m\i\3\v\c\7\i\4\v\b\d\6\y\g\w\r\y\v\7\d\5\4\e\x\z\v\8\0\h\n\e\e\y\b\9\w\7\g\v\n\u\5\5\q\h\0\x\6\o\o\j\q\e\j\t\7\t\i\j\e\n\6\r\4\b\9\u\o\o\j\2\t\d\g\k\d\h\g\m\8\7\s\y\n\d\t\f\v\q\a\k\n\n\8\h\q\j\k\m\p\2\l\7\c\2\d\s\x\2\y\e\4\v\z\5\t\c\m\0\3\r\6\b\4\s\r\m\6\4\a\f\i\0\c\t\y\7\2\i\5\u\p\8\3\7\h\v\d\1\y\2\q\n\s\s\0\e\r\1\l\g\n\d\c\n\j\2\b\r\6\2\m\y\v\s\n\4\m\4\j\e\l\g\j\1\v\r\4\6\9\g\8\9\n\a\o\p\n\d\1\8\r\6\8\0\z\z\9\8\q\5\o\3\u\9\f\s\0\y\b\k\l\r\l\3\k\6\q\9\d\r\d\m\g\x\8\d\a\d\b\i\l\8\n\t\u\y\a\z\5\o\i\2\u\i\a\l\o\n\b\a\y\2\t\3\0\0\2\9\j\d\y\b\j\y\9\f\t\f\j\7\a\c\m\0\c\x\a\h\c\j\q\x\b\8\o\r\b\m\2\e\s\z\c\z\6\m\s\q\n\a\l\5\i\d\m\3\y\u\n\4\c\l\5\x\d\t\0\2\c\x\f\v\l\5\5\d\c\y\r\j\0\1\y\h\3\q\x\f\l\j\7\p\5\v\9\n\k\2\v\t\9\c\d\i\o\x\z\8\i\c\s\z\p\c\b\b\l\h\9\l\f\u\h\w\m\a\4\s\6\t\e\1\p\v\y\w\q\n\t\l\q\3\m\i\1\h\2\j\j\h\x\m\g\q\f\h\o\a\w\r\8\o\f\o\f\1\a\y\w\d\a\u\r\y\y\7\7\0\o\r\3\j\o\r\2\s\t\n\v\0\1\5\z\g\5\y\t\k\2\d\p\d\u\w\t\v\e\w\7\5\n\a\c\z\g\w\i\b\q\s\g\a\h\g\s\3\y\0\4\7\8\j\b\a\d\s\n\k\x\o\p\q\5\l\3\c\9\w\2\x\w\t\y\g\s\a\r\v\5\7\j\3\q\1\8\8\x\u\i\i\a\e\u\y\e\h\4\7\6\3\t\2\f\e\0\y\m\y\e\9\d\e\v\u\n\z\v\1\c\g\6\2\k\3\y\b\z\d\w\5\h\d\6\h\t\p\7\z\6\r\w\4\s\m\q\r\l\z\t\g\j\o\1\q\l\5\8\t\e\g\b\m\i\s\e\0\6\n\z\z\c\s\j\b\b\5\i\w\w\h\3\9\6\9\f\2\q\j\l\8\k\6\n\b\m\y\e\b\f\0\p\8\4\5\o\0\a\v\r\e\d\f\x\u\v\g\u\4\2\a\g\h\6\b\c\5\q\c\2\l\e\a\c\n\f\l\z\2\1\q\d\c\l\5\j\8\2\b\n\y\7\l\a\3\z\2\o\4\c\m\1\d\j\r\u\9\y\8\3\h\1\k\4\q\c\u\1\i\r\4\w\9\e\r\3\u\x\k\v\s\b\3\v\2\h\a\6\f\p\q\4\i\1\d\6\t\y\8\4\f\1\r\h\j\s\q\k\p\5\7\v\4\z\i\v\v\r\w\g\y\s\k\0\x\t\d\7\1\9\i\l\9\2\l\l\y\5\v\l\n\f\e\l\k\1\g\7\t\9\z\z\2\7\f\u\9\m\4\h\x\w\s\1\7\d\7\6\3\g\p\g\r\v\2\m\p\c\k\t\0\f\o\q\k\l\a\s\7\3\7\v\i\w\0\2\g\0\i\w\r\0\o\2\a\z\9\k\5\5\t\0\j\z\g\r\7\a\3\o\1\4\c\0\7\4\k\1\3\q\f\k\4\j\3\j\r\1\3\2\r\u\u\q\m\3\g\m\5\y\k\e\i\f\5\t\a\z\5\y\5\7\b\e\8\1\r\m\u\4\6\1\7\7\u\u\e\p\i\d\y\z\c\3\j\6\w\e\d\0\h\m\9\8\z\j\r\u\l\a\f\m\p\h\5\k\c\f\q\a\8\m\5\f\9\g\q\l\p\r\b\f\k\s\d\v\t\r\m\f\2\1\b\5\y\4\6\h\u\h\t\r\1\v\a\0\f\3\v\g\g\0\a\5\h\x\w\u\q\r\b\n\7\a\z\9\i\z\u\5\i\7\v\c\l\4\4\y\n\q\l\5\s\h\p\z\p\g\d\r\w\x\h\a\7\i\n\3\8\s\q\n\h\u\2\q\f\s\u\n\e\q\n\7\o\y\v\l\n\z\2\a\c\6\p\t\7\s\g\o\c\9\8\t\l\0\x\0\i\y\u\1\i\n\i\g\b\p\n\9\g\5\2\w\j\c\h\t\f\6\d\w\p\y\2\t\x\4\z\k\g\9\d\7\x\4\z\r\9\0\i\s\m\6\z\3\q\u\7\j\z\7\a\s\x\q\j\9\z\g\r\n\e\i\e\w\s\q\6\h\5\2\j\1\d\k\9\e\1\k\5\a\t\a\c\7\3\0\p\5\e\y\t\f\j\8\y\3\e\r\q\q\0\0\1\r\j\0\x\h\f\1\k\w\z\7\i\j\3\w\m\e\1\n\9\t\k\1\7\3\b\e\g\u\4\z\2\7\1\9\g\4\2\g\p\0\o\j\s\w\g\d\g\e\x\u\9\g\4\q\g\x\5\p\5\2\g\a\d\h\j\8\c\6\y\i\l\7\4\d\x\0\p\j\x\a\x\h\c\l\5\v\6\1\k\8\a\w\w\t\u\j\r\o\t\m\b\b\c\v\z\m\r\h\a\r\x\f\7\w\p\5\g\t\t\0\h\f\7\3\h\v\t\q\b\g\e\s\i\x\b\z\q\b\b\h\f\p\9\k\z\d\m\f\0\g\z\8\x\l\p\u\p\y\g\u\7\q\o\x\v\l\b\w\x\v\g\0\j\o\x\m\a\4\8\9\e\5\o\u\f\o\v\k\1\x\k\p\6\k\b\f\m\z\5\z\4\7\4\k\n\j\2\b\8\z\4\x\1\u\z\0\5\5\5\w\i\v\c\q\2\r\4\y\w\r\p\q\l\t\h\8\y\m\p\4\4\f\9\k\d\1\k\9\3\v\4\w\4\b\f\m\9\g\z\0\s\o\n\v\6\l\3\6\5\p\d\1\e\o\z\3\n\v\h\9\8\i\9\g\g\i\2\j\l\y\h\0\7\3\g\7\z\4\i\z\7\3\f\4\p\1\v\v\d\d\9\b\b\z\f\w\v\9\v\f\g\q\t\u\0\j\d\l\f\4\d\l\4\h\e\v\j\s\a\7\j\n\j\9\c\j\6\a\r\k\o\s\i\s\u\u\f\k\p\u\0\g\g\j\5\v\x\n\k\p\9\9\y\m\2\x\c\s\q\r\u\c\b\6\q\6\h\v\4\q\x\1\v\4\g\2\z\1\j\2\i\b\3\w\y\y\2\e\2\l\s\3\9\v\s\m\3\w\k\0\s\u\9\a\m\l\d\l\o\r\5\o\5\h\c\h\o\h\s\z\3\r\5\r\6\t\l\1\5\w\d\b\c\8\g\4\b\j\a\c\u\t\q\a\o\d\n\k\q\y\6\5\o\1\o\0\x\o\r\f\l\2\q\4\0\m\6\l\p\m\w\v\z\f\7\5\h\v\k\k\b\x\k\w\c\r\r\i\s\h\4\b\h\j\9\m\s\q\7\6\s\9\7\r\f\z\5\m\q\l\y\y\1\3\e\4\y\a\d\j\s\7\o\x\x\n\j\e\3\f\2\p\q\4\p\y\s\r\s\4\6\o\7\r\z\5\4\r\1\3\b\f\y\f\0\p\f\6\p\m\k\7\p\8\e\u\2\t\t\v\w\y\9\j\5\z\a\2\g\t\9\e\g\r\w\o\d\3\m\p\j\7\6\5\2\l\0\8\7\j\7\p\5\m\x\9\q\9\5\w\u\2\5\a\1 ]]
00:12:29.875  
00:12:29.875  real	0m1.163s
00:12:29.875  user	0m0.645s
00:12:29.875  sys	0m0.323s
00:12:29.875  ************************************
00:12:29.875  END TEST dd_rw_offset
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:12:29.876  ************************************
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1
00:12:29.876   05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:29.876    05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf
00:12:29.876    05:57:50 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:29.876    05:57:50 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:29.876  {
00:12:29.876    "subsystems": [
00:12:29.876      {
00:12:29.876        "subsystem": "bdev",
00:12:29.876        "config": [
00:12:29.876          {
00:12:29.876            "params": {
00:12:29.876              "trtype": "pcie",
00:12:29.876              "traddr": "0000:00:10.0",
00:12:29.876              "name": "Nvme0"
00:12:29.876            },
00:12:29.876            "method": "bdev_nvme_attach_controller"
00:12:29.876          },
00:12:29.876          {
00:12:29.876            "method": "bdev_wait_for_examine"
00:12:29.876          }
00:12:29.876        ]
00:12:29.876      }
00:12:29.876    ]
00:12:29.876  }
00:12:30.135  [2024-11-18 05:57:50.882135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:30.135  [2024-11-18 05:57:50.882349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84396 ]
00:12:30.135  [2024-11-18 05:57:51.035637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:30.135  [2024-11-18 05:57:51.055847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:30.394  
[2024-11-18T05:57:51.372Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:30.394  
00:12:30.394   05:57:51 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:30.394  
00:12:30.394  real	0m15.961s
00:12:30.394  user	0m9.828s
00:12:30.395  sys	0m3.932s
00:12:30.395   05:57:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:30.395  ************************************
00:12:30.395  END TEST spdk_dd_basic_rw
00:12:30.395  ************************************
00:12:30.395   05:57:51 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:30.656   05:57:51 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:12:30.656   05:57:51 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:30.656   05:57:51 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:30.656   05:57:51 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:12:30.656  ************************************
00:12:30.656  START TEST spdk_dd_posix
00:12:30.656  ************************************
00:12:30.656   05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:12:30.656  * Looking for test storage...
00:12:30.656  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-:
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-:
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<'
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:30.656      05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:30.656  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.656  		--rc genhtml_branch_coverage=1
00:12:30.656  		--rc genhtml_function_coverage=1
00:12:30.656  		--rc genhtml_legend=1
00:12:30.656  		--rc geninfo_all_blocks=1
00:12:30.656  		--rc geninfo_unexecuted_blocks=1
00:12:30.656  		
00:12:30.656  		'
00:12:30.656     05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:30.656  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.657  		--rc genhtml_branch_coverage=1
00:12:30.657  		--rc genhtml_function_coverage=1
00:12:30.657  		--rc genhtml_legend=1
00:12:30.657  		--rc geninfo_all_blocks=1
00:12:30.657  		--rc geninfo_unexecuted_blocks=1
00:12:30.657  		
00:12:30.657  		'
00:12:30.657     05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:30.657  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.657  		--rc genhtml_branch_coverage=1
00:12:30.657  		--rc genhtml_function_coverage=1
00:12:30.657  		--rc genhtml_legend=1
00:12:30.657  		--rc geninfo_all_blocks=1
00:12:30.657  		--rc geninfo_unexecuted_blocks=1
00:12:30.657  		
00:12:30.657  		'
00:12:30.657     05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:30.657  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:30.657  		--rc genhtml_branch_coverage=1
00:12:30.657  		--rc genhtml_function_coverage=1
00:12:30.657  		--rc genhtml_legend=1
00:12:30.657  		--rc geninfo_all_blocks=1
00:12:30.657  		--rc geninfo_unexecuted_blocks=1
00:12:30.657  		
00:12:30.657  		'
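(The trace above is scripts/common.sh deciding whether the detected lcov, here 1.15, predates 2.x, which selects the --rc lcov_branch_coverage=... option spelling exported into LCOV_OPTS below. It splits both version strings on ".", "-", and ":" and compares them component by component. A minimal standalone sketch of that flow, for numeric components only; the function name and usage here are illustrative, not the exact helper:

    # Sketch of the component-wise version comparison the trace walks
    # through (cmp_versions in scripts/common.sh); numeric parts only.
    lt() {   # lt 1.15 2  -> exit 0 iff $1 sorts before $2
        local -a ver1 ver2
        local v n
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < n; v++ )); do
            # Missing components compare as 0, so 1.15 acts like 1.15.0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"
)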
00:12:30.657    05:57:51 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:30.657     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob
00:12:30.657     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:30.657     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:30.657     05:57:51 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:30.657      05:57:51 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:30.657      05:57:51 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:30.657      05:57:51 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:30.657      05:57:51 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:30.657      05:57:51 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # export PATH
00:12:30.657      05:57:51 spdk_dd.spdk_dd_posix -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
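(By the time paths/export.sh runs here, the same tool directories (/opt/go/1.21.1/bin, /opt/protoc/21.7/bin, ...) have already been prepended several times over, which is why the echoed PATH repeats them. That is harmless for lookup, since the first match wins, but an idempotent prepend is the usual fix. A sketch; prepend_path is a hypothetical helper, not something paths/export.sh defines:

    # Prepend a directory to PATH only if it is not already present.
    # prepend_path is illustrative, not part of the SPDK scripts.
    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }
    prepend_path /opt/go/1.21.1/bin
    prepend_path /opt/protoc/21.7/bin
    export PATH
)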
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use'
00:12:30.657  * First test run, liburing in use
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:30.657  ************************************
00:12:30.657  START TEST dd_flag_append
00:12:30.657  ************************************
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1
00:12:30.657    05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32
00:12:30.657    05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable
00:12:30.657    05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=qk2ep10qcc1n5h9ggyvdp5hc5tkhz4de
00:12:30.657    05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32
00:12:30.657    05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable
00:12:30.657    05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=ph8fqvl1utaw6ocwm2j7i78odvqjl15r
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s qk2ep10qcc1n5h9ggyvdp5hc5tkhz4de
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s ph8fqvl1utaw6ocwm2j7i78odvqjl15r
00:12:30.657   05:57:51 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:12:30.928  [2024-11-18 05:57:51.679094] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:30.928  [2024-11-18 05:57:51.679339] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84463 ]
00:12:30.928  [2024-11-18 05:57:51.826780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:30.929  [2024-11-18 05:57:51.847090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:30.929  
[2024-11-18T05:57:52.165Z] Copying: 32/32 [B] (average 31 kBps)
00:12:31.187  
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ ph8fqvl1utaw6ocwm2j7i78odvqjl15rqk2ep10qcc1n5h9ggyvdp5hc5tkhz4de == \p\h\8\f\q\v\l\1\u\t\a\w\6\o\c\w\m\2\j\7\i\7\8\o\d\v\q\j\l\1\5\r\q\k\2\e\p\1\0\q\c\c\1\n\5\h\9\g\g\y\v\d\p\5\h\c\5\t\k\h\z\4\d\e ]]
00:12:31.187  
00:12:31.187  real	0m0.445s
00:12:31.187  user	0m0.187s
00:12:31.187  sys	0m0.141s
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:12:31.187  ************************************
00:12:31.187  END TEST dd_flag_append
00:12:31.187  ************************************
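(What dd_flag_append just verified: gen_bytes 32 supplies two distinct random strings for dump0 and dump1, the copy runs with --oflag=append, and the final [[ ... ]] check confirms dump1 now holds its original string followed by dump0's. The same O_APPEND behavior can be sketched with coreutils dd; the file names and contents below are made up:

    # Illustrative re-creation of the append assertion with coreutils dd
    # (same O_APPEND semantics as spdk_dd --oflag=append).
    printf %s AAAA > dump0            # stands in for gen_bytes output
    printf %s BBBB > dump1
    dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
    [[ $(cat dump1) == BBBBAAAA ]] && echo "existing bytes kept, new bytes appended"
    rm -f dump0 dump1
)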
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:31.187  ************************************
00:12:31.187  START TEST dd_flag_directory
00:12:31.187  ************************************
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:31.187    05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:31.187    05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.187   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:31.188   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:31.446  [2024-11-18 05:57:52.168070] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:31.446  [2024-11-18 05:57:52.168270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84490 ]
00:12:31.446  [2024-11-18 05:57:52.318261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:31.446  [2024-11-18 05:57:52.337578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:31.446  [2024-11-18 05:57:52.383063] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:31.446  [2024-11-18 05:57:52.383162] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:31.446  [2024-11-18 05:57:52.383180] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:31.705  [2024-11-18 05:57:52.458086] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:31.705    05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:31.705    05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:31.705   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:12:31.705  [2024-11-18 05:57:52.602102] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:31.705  [2024-11-18 05:57:52.602307] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84501 ]
00:12:31.964  [2024-11-18 05:57:52.758220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:31.964  [2024-11-18 05:57:52.780458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:31.964  [2024-11-18 05:57:52.831326] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:31.964  [2024-11-18 05:57:52.831413] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:31.964  [2024-11-18 05:57:52.831434] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:31.964  [2024-11-18 05:57:52.906982] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:32.223  
00:12:32.223  real	0m0.875s
00:12:32.223  user	0m0.412s
00:12:32.223  sys	0m0.261s
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:32.223   05:57:52 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x
00:12:32.223  ************************************
00:12:32.223  END TEST dd_flag_directory
00:12:32.223  ************************************
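(dd_flag_directory is a negative test: --iflag=directory on a regular file must fail, so each spdk_dd invocation is wrapped in autotest_common.sh's NOT helper, which is what produces the es=236 -> es=108 -> es=1 remapping in the trace (236 is 128+108, the status spdk_dd exits with after the copy error). A minimal sketch of that wrapper pattern; the real helper does more, this only keeps the shape:

    # Run a command that is *expected* to fail and normalize its status.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # fold 128+N signal statuses
        (( es != 0 ))                          # succeed only if "$@" failed
    }

    printf %s x > dump0
    # Same assertion the test makes, with coreutils dd standing in:
    NOT dd if=dump0 of=/dev/null iflag=directory status=none 2>/dev/null \
        && echo "iflag=directory on a regular file failed, as required"
    rm -f dump0
)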
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:32.223  ************************************
00:12:32.223  START TEST dd_flag_nofollow
00:12:32.223  ************************************
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:32.223    05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:32.223    05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:32.223   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:32.223  [2024-11-18 05:57:53.106236] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:32.223  [2024-11-18 05:57:53.106473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84530 ]
00:12:32.482  [2024-11-18 05:57:53.261141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:32.482  [2024-11-18 05:57:53.281558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:32.483  [2024-11-18 05:57:53.328191] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:12:32.483  [2024-11-18 05:57:53.328308] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:12:32.483  [2024-11-18 05:57:53.328338] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:32.483  [2024-11-18 05:57:53.402687] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:32.742    05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:32.742    05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:32.742   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:12:32.742  [2024-11-18 05:57:53.539775] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:32.742  [2024-11-18 05:57:53.539984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84541 ]
00:12:32.742  [2024-11-18 05:57:53.692849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:32.742  [2024-11-18 05:57:53.712814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:33.001  [2024-11-18 05:57:53.763694] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:12:33.001  [2024-11-18 05:57:53.763801] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:12:33.001  [2024-11-18 05:57:53.763823] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:33.001  [2024-11-18 05:57:53.842761] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x
00:12:33.001   05:57:53 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:33.260  [2024-11-18 05:57:53.989806] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:33.260  [2024-11-18 05:57:53.990010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84545 ]
00:12:33.260  [2024-11-18 05:57:54.142202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:33.260  [2024-11-18 05:57:54.164161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:33.260  
[2024-11-18T05:57:54.497Z] Copying: 512/512 [B] (average 500 kBps)
00:12:33.519  
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ fsbxd17f8ue6e2fbsfmf4n1hyilk6nw7x5j5dngqindbg778wlaxy4gv62j1s0wvr01lptjlbyirwndqtizyaeqyw17ljukyqvar6tlc1jz9qom556pdn1t5370ylyxnwqwg093v56j6ud0fdyawmtfcz39mbpjauij8l18bs1bt6c0q50qaqp53cku6oa5p1t4p58y3niss59h8ejcad3jvg16jn4ppooszntmn67v5hqkltw53wexd8jmsx7b0osms0kbwe03umma1iej72fakah0coil54efg0ukgcgycruv5ks8ocl4trhzueh6x7txqnn57tphqtqgiecplov2bfnz6ad2yrqe8eyl5b10u5sahsrozbtdgcan133yhm4gu5yd2rmd1ip5ffhn5aipofdal4sxtff8fnwdd9r9vcc8d519ppua19zyd3so91xyik31ryw0wyfcai0bwlz2sdev22tuj78rccpy5rgxgylifwghpefks16c9aj3v == \f\s\b\x\d\1\7\f\8\u\e\6\e\2\f\b\s\f\m\f\4\n\1\h\y\i\l\k\6\n\w\7\x\5\j\5\d\n\g\q\i\n\d\b\g\7\7\8\w\l\a\x\y\4\g\v\6\2\j\1\s\0\w\v\r\0\1\l\p\t\j\l\b\y\i\r\w\n\d\q\t\i\z\y\a\e\q\y\w\1\7\l\j\u\k\y\q\v\a\r\6\t\l\c\1\j\z\9\q\o\m\5\5\6\p\d\n\1\t\5\3\7\0\y\l\y\x\n\w\q\w\g\0\9\3\v\5\6\j\6\u\d\0\f\d\y\a\w\m\t\f\c\z\3\9\m\b\p\j\a\u\i\j\8\l\1\8\b\s\1\b\t\6\c\0\q\5\0\q\a\q\p\5\3\c\k\u\6\o\a\5\p\1\t\4\p\5\8\y\3\n\i\s\s\5\9\h\8\e\j\c\a\d\3\j\v\g\1\6\j\n\4\p\p\o\o\s\z\n\t\m\n\6\7\v\5\h\q\k\l\t\w\5\3\w\e\x\d\8\j\m\s\x\7\b\0\o\s\m\s\0\k\b\w\e\0\3\u\m\m\a\1\i\e\j\7\2\f\a\k\a\h\0\c\o\i\l\5\4\e\f\g\0\u\k\g\c\g\y\c\r\u\v\5\k\s\8\o\c\l\4\t\r\h\z\u\e\h\6\x\7\t\x\q\n\n\5\7\t\p\h\q\t\q\g\i\e\c\p\l\o\v\2\b\f\n\z\6\a\d\2\y\r\q\e\8\e\y\l\5\b\1\0\u\5\s\a\h\s\r\o\z\b\t\d\g\c\a\n\1\3\3\y\h\m\4\g\u\5\y\d\2\r\m\d\1\i\p\5\f\f\h\n\5\a\i\p\o\f\d\a\l\4\s\x\t\f\f\8\f\n\w\d\d\9\r\9\v\c\c\8\d\5\1\9\p\p\u\a\1\9\z\y\d\3\s\o\9\1\x\y\i\k\3\1\r\y\w\0\w\y\f\c\a\i\0\b\w\l\z\2\s\d\e\v\2\2\t\u\j\7\8\r\c\c\p\y\5\r\g\x\g\y\l\i\f\w\g\h\p\e\f\k\s\1\6\c\9\a\j\3\v ]]
00:12:33.519  
00:12:33.519  real	0m1.352s
00:12:33.519  user	0m0.609s
00:12:33.519  sys	0m0.423s
00:12:33.519  ************************************
00:12:33.519  END TEST dd_flag_nofollow
00:12:33.519  ************************************
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x
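(dd_flag_nofollow covers both directions: opening dd.dump0.link with --iflag=nofollow must fail with ELOOP (the "Too many levels of symbolic links" errors above), while the final copy through the link without the flag succeeds and the content check passes. Coreutils dd exposes the same O_NOFOLLOW behavior, sketched here with illustrative file names:

    # Sketch of both halves of the nofollow test with coreutils dd.
    printf %s payload > dump0
    ln -fs dump0 dump0.link
    if ! dd if=dump0.link of=/dev/null iflag=nofollow status=none 2>/dev/null; then
        echo "nofollow refused the symlink (ELOOP), as the test requires"
    fi
    dd if=dump0.link of=/dev/null status=none && echo "without nofollow the link is followed"
    rm -f dump0 dump0.link
)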
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:33.519  ************************************
00:12:33.519  START TEST dd_flag_noatime
00:12:33.519  ************************************
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x
00:12:33.519    05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731909474
00:12:33.519    05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731909474
00:12:33.519   05:57:54 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1
00:12:34.897   05:57:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:34.897  [2024-11-18 05:57:55.526131] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:34.897  [2024-11-18 05:57:55.526368] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84591 ]
00:12:34.897  [2024-11-18 05:57:55.687749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:34.897  [2024-11-18 05:57:55.712787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:34.897  
[2024-11-18T05:57:56.134Z] Copying: 512/512 [B] (average 500 kBps)
00:12:35.156  
00:12:35.156    05:57:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:35.156   05:57:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731909474 ))
00:12:35.156    05:57:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:35.156   05:57:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731909474 ))
00:12:35.156   05:57:55 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:35.156  [2024-11-18 05:57:56.016165] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:35.156  [2024-11-18 05:57:56.016385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84598 ]
00:12:35.415  [2024-11-18 05:57:56.169705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:35.415  [2024-11-18 05:57:56.191749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:35.415  
[2024-11-18T05:57:56.652Z] Copying: 512/512 [B] (average 500 kBps)
00:12:35.674  
00:12:35.674    05:57:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731909476 ))
00:12:35.674  
00:12:35.674  real	0m1.978s
00:12:35.674  user	0m0.440s
00:12:35.674  sys	0m0.306s
00:12:35.674  ************************************
00:12:35.674  END TEST dd_flag_noatime
00:12:35.674  ************************************
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x
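(dd_flag_noatime records each file's access time with stat --printf=%X, sleeps past the one-second timestamp granularity, then checks that a copy with --iflag=noatime leaves atime untouched (the (( atime_if == 1731909474 )) checks) while a plain copy advances it ((( atime_if < 1731909476 ))). A sketch of the same check with coreutils dd; note O_NOATIME is restricted to the file owner, and the "atime advanced" half assumes a mount that actually updates atime on read (strictatime rather than noatime/relatime):

    # Sketch of the noatime assertions with coreutils dd.
    printf %s x > dump0
    before=$(stat --printf=%X dump0)
    sleep 1
    dd if=dump0 of=/dev/null iflag=noatime status=none
    (( $(stat --printf=%X dump0) == before )) && echo "noatime: atime unchanged"
    sleep 1
    dd if=dump0 of=/dev/null status=none
    (( $(stat --printf=%X dump0) > before )) && echo "plain read: atime advanced"
    rm -f dump0
)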
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:35.674  ************************************
00:12:35.674  START TEST dd_flags_misc
00:12:35.674  ************************************
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:35.674   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:12:35.674  [2024-11-18 05:57:56.539742] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:35.674  [2024-11-18 05:57:56.539970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84631 ]
00:12:35.933  [2024-11-18 05:57:56.693297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:35.933  [2024-11-18 05:57:56.713336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:35.933  
[2024-11-18T05:57:57.170Z] Copying: 512/512 [B] (average 500 kBps)
00:12:36.192  
00:12:36.192   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ krn62mhn3l40wqslvl67p0r2sb1y4omsmucc87cjkml8dy7t8aypi7j9e5su8uo4g63o2gzdsp4a71w0g4infdafbodj8cl1h37nxn97ref97ith3me4wecrhih9wdaq0g36sii0imqrb8jy5fkshnjq4k0iw400i7n1qk4qvi3fgk8aysciec9yv6gdua9dnd19eujw37iy37ypivovr51lxkb8rtmogn9ry1axg6cjbf5e3dxt8ajbjwettsp5wxsxajngw7lq4gmvlerb5cdbfoeho67jd8azv7juk4ap5nvh7naabju6lyat29o7pjmnb3biew7c5lwjb32rgl6bhmi64rwvizpqia26tual234b5odg16vook5cy11lcnckwc8qs2ea9slasi47zld43yddgtcapsbmif9d6tro1r38209i1mdhwqa2ir3v0flrh3rj9ipapgv2axg9ruf8qni5exi85x58yhdadstc94o73i2qlrrrlw5epwsd == \k\r\n\6\2\m\h\n\3\l\4\0\w\q\s\l\v\l\6\7\p\0\r\2\s\b\1\y\4\o\m\s\m\u\c\c\8\7\c\j\k\m\l\8\d\y\7\t\8\a\y\p\i\7\j\9\e\5\s\u\8\u\o\4\g\6\3\o\2\g\z\d\s\p\4\a\7\1\w\0\g\4\i\n\f\d\a\f\b\o\d\j\8\c\l\1\h\3\7\n\x\n\9\7\r\e\f\9\7\i\t\h\3\m\e\4\w\e\c\r\h\i\h\9\w\d\a\q\0\g\3\6\s\i\i\0\i\m\q\r\b\8\j\y\5\f\k\s\h\n\j\q\4\k\0\i\w\4\0\0\i\7\n\1\q\k\4\q\v\i\3\f\g\k\8\a\y\s\c\i\e\c\9\y\v\6\g\d\u\a\9\d\n\d\1\9\e\u\j\w\3\7\i\y\3\7\y\p\i\v\o\v\r\5\1\l\x\k\b\8\r\t\m\o\g\n\9\r\y\1\a\x\g\6\c\j\b\f\5\e\3\d\x\t\8\a\j\b\j\w\e\t\t\s\p\5\w\x\s\x\a\j\n\g\w\7\l\q\4\g\m\v\l\e\r\b\5\c\d\b\f\o\e\h\o\6\7\j\d\8\a\z\v\7\j\u\k\4\a\p\5\n\v\h\7\n\a\a\b\j\u\6\l\y\a\t\2\9\o\7\p\j\m\n\b\3\b\i\e\w\7\c\5\l\w\j\b\3\2\r\g\l\6\b\h\m\i\6\4\r\w\v\i\z\p\q\i\a\2\6\t\u\a\l\2\3\4\b\5\o\d\g\1\6\v\o\o\k\5\c\y\1\1\l\c\n\c\k\w\c\8\q\s\2\e\a\9\s\l\a\s\i\4\7\z\l\d\4\3\y\d\d\g\t\c\a\p\s\b\m\i\f\9\d\6\t\r\o\1\r\3\8\2\0\9\i\1\m\d\h\w\q\a\2\i\r\3\v\0\f\l\r\h\3\r\j\9\i\p\a\p\g\v\2\a\x\g\9\r\u\f\8\q\n\i\5\e\x\i\8\5\x\5\8\y\h\d\a\d\s\t\c\9\4\o\7\3\i\2\q\l\r\r\r\l\w\5\e\p\w\s\d ]]
00:12:36.192   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:36.192   05:57:56 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:12:36.192  [2024-11-18 05:57:56.995490] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:36.193  [2024-11-18 05:57:56.995700] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84634 ]
00:12:36.193  [2024-11-18 05:57:57.149770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:36.193  [2024-11-18 05:57:57.171294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:36.452  
[2024-11-18T05:57:57.430Z] Copying: 512/512 [B] (average 500 kBps)
00:12:36.452  
00:12:36.452   05:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ krn62mhn3l40wqslvl67p0r2sb1y4omsmucc87cjkml8dy7t8aypi7j9e5su8uo4g63o2gzdsp4a71w0g4infdafbodj8cl1h37nxn97ref97ith3me4wecrhih9wdaq0g36sii0imqrb8jy5fkshnjq4k0iw400i7n1qk4qvi3fgk8aysciec9yv6gdua9dnd19eujw37iy37ypivovr51lxkb8rtmogn9ry1axg6cjbf5e3dxt8ajbjwettsp5wxsxajngw7lq4gmvlerb5cdbfoeho67jd8azv7juk4ap5nvh7naabju6lyat29o7pjmnb3biew7c5lwjb32rgl6bhmi64rwvizpqia26tual234b5odg16vook5cy11lcnckwc8qs2ea9slasi47zld43yddgtcapsbmif9d6tro1r38209i1mdhwqa2ir3v0flrh3rj9ipapgv2axg9ruf8qni5exi85x58yhdadstc94o73i2qlrrrlw5epwsd == \k\r\n\6\2\m\h\n\3\l\4\0\w\q\s\l\v\l\6\7\p\0\r\2\s\b\1\y\4\o\m\s\m\u\c\c\8\7\c\j\k\m\l\8\d\y\7\t\8\a\y\p\i\7\j\9\e\5\s\u\8\u\o\4\g\6\3\o\2\g\z\d\s\p\4\a\7\1\w\0\g\4\i\n\f\d\a\f\b\o\d\j\8\c\l\1\h\3\7\n\x\n\9\7\r\e\f\9\7\i\t\h\3\m\e\4\w\e\c\r\h\i\h\9\w\d\a\q\0\g\3\6\s\i\i\0\i\m\q\r\b\8\j\y\5\f\k\s\h\n\j\q\4\k\0\i\w\4\0\0\i\7\n\1\q\k\4\q\v\i\3\f\g\k\8\a\y\s\c\i\e\c\9\y\v\6\g\d\u\a\9\d\n\d\1\9\e\u\j\w\3\7\i\y\3\7\y\p\i\v\o\v\r\5\1\l\x\k\b\8\r\t\m\o\g\n\9\r\y\1\a\x\g\6\c\j\b\f\5\e\3\d\x\t\8\a\j\b\j\w\e\t\t\s\p\5\w\x\s\x\a\j\n\g\w\7\l\q\4\g\m\v\l\e\r\b\5\c\d\b\f\o\e\h\o\6\7\j\d\8\a\z\v\7\j\u\k\4\a\p\5\n\v\h\7\n\a\a\b\j\u\6\l\y\a\t\2\9\o\7\p\j\m\n\b\3\b\i\e\w\7\c\5\l\w\j\b\3\2\r\g\l\6\b\h\m\i\6\4\r\w\v\i\z\p\q\i\a\2\6\t\u\a\l\2\3\4\b\5\o\d\g\1\6\v\o\o\k\5\c\y\1\1\l\c\n\c\k\w\c\8\q\s\2\e\a\9\s\l\a\s\i\4\7\z\l\d\4\3\y\d\d\g\t\c\a\p\s\b\m\i\f\9\d\6\t\r\o\1\r\3\8\2\0\9\i\1\m\d\h\w\q\a\2\i\r\3\v\0\f\l\r\h\3\r\j\9\i\p\a\p\g\v\2\a\x\g\9\r\u\f\8\q\n\i\5\e\x\i\8\5\x\5\8\y\h\d\a\d\s\t\c\9\4\o\7\3\i\2\q\l\r\r\r\l\w\5\e\p\w\s\d ]]
00:12:36.452   05:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:36.452   05:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:12:36.711  [2024-11-18 05:57:57.463353] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:36.711  [2024-11-18 05:57:57.463570] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84648 ]
00:12:36.711  [2024-11-18 05:57:57.617040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:36.711  [2024-11-18 05:57:57.638030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:36.711  
[2024-11-18T05:57:57.949Z] Copying: 512/512 [B] (average 125 kBps)
00:12:36.971  
00:12:36.971   05:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ krn62mhn3l40wqslvl67p0r2sb1y4omsmucc87cjkml8dy7t8aypi7j9e5su8uo4g63o2gzdsp4a71w0g4infdafbodj8cl1h37nxn97ref97ith3me4wecrhih9wdaq0g36sii0imqrb8jy5fkshnjq4k0iw400i7n1qk4qvi3fgk8aysciec9yv6gdua9dnd19eujw37iy37ypivovr51lxkb8rtmogn9ry1axg6cjbf5e3dxt8ajbjwettsp5wxsxajngw7lq4gmvlerb5cdbfoeho67jd8azv7juk4ap5nvh7naabju6lyat29o7pjmnb3biew7c5lwjb32rgl6bhmi64rwvizpqia26tual234b5odg16vook5cy11lcnckwc8qs2ea9slasi47zld43yddgtcapsbmif9d6tro1r38209i1mdhwqa2ir3v0flrh3rj9ipapgv2axg9ruf8qni5exi85x58yhdadstc94o73i2qlrrrlw5epwsd == \k\r\n\6\2\m\h\n\3\l\4\0\w\q\s\l\v\l\6\7\p\0\r\2\s\b\1\y\4\o\m\s\m\u\c\c\8\7\c\j\k\m\l\8\d\y\7\t\8\a\y\p\i\7\j\9\e\5\s\u\8\u\o\4\g\6\3\o\2\g\z\d\s\p\4\a\7\1\w\0\g\4\i\n\f\d\a\f\b\o\d\j\8\c\l\1\h\3\7\n\x\n\9\7\r\e\f\9\7\i\t\h\3\m\e\4\w\e\c\r\h\i\h\9\w\d\a\q\0\g\3\6\s\i\i\0\i\m\q\r\b\8\j\y\5\f\k\s\h\n\j\q\4\k\0\i\w\4\0\0\i\7\n\1\q\k\4\q\v\i\3\f\g\k\8\a\y\s\c\i\e\c\9\y\v\6\g\d\u\a\9\d\n\d\1\9\e\u\j\w\3\7\i\y\3\7\y\p\i\v\o\v\r\5\1\l\x\k\b\8\r\t\m\o\g\n\9\r\y\1\a\x\g\6\c\j\b\f\5\e\3\d\x\t\8\a\j\b\j\w\e\t\t\s\p\5\w\x\s\x\a\j\n\g\w\7\l\q\4\g\m\v\l\e\r\b\5\c\d\b\f\o\e\h\o\6\7\j\d\8\a\z\v\7\j\u\k\4\a\p\5\n\v\h\7\n\a\a\b\j\u\6\l\y\a\t\2\9\o\7\p\j\m\n\b\3\b\i\e\w\7\c\5\l\w\j\b\3\2\r\g\l\6\b\h\m\i\6\4\r\w\v\i\z\p\q\i\a\2\6\t\u\a\l\2\3\4\b\5\o\d\g\1\6\v\o\o\k\5\c\y\1\1\l\c\n\c\k\w\c\8\q\s\2\e\a\9\s\l\a\s\i\4\7\z\l\d\4\3\y\d\d\g\t\c\a\p\s\b\m\i\f\9\d\6\t\r\o\1\r\3\8\2\0\9\i\1\m\d\h\w\q\a\2\i\r\3\v\0\f\l\r\h\3\r\j\9\i\p\a\p\g\v\2\a\x\g\9\r\u\f\8\q\n\i\5\e\x\i\8\5\x\5\8\y\h\d\a\d\s\t\c\9\4\o\7\3\i\2\q\l\r\r\r\l\w\5\e\p\w\s\d ]]
00:12:36.971   05:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:36.971   05:57:57 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:12:36.971  [2024-11-18 05:57:57.925993] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:36.971  [2024-11-18 05:57:57.926197] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84651 ]
00:12:37.230  [2024-11-18 05:57:58.080311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:37.230  [2024-11-18 05:57:58.100332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.230  
[2024-11-18T05:57:58.468Z] Copying: 512/512 [B] (average 166 kBps)
00:12:37.490  
00:12:37.490   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ krn62mhn3l40wqslvl67p0r2sb1y4omsmucc87cjkml8dy7t8aypi7j9e5su8uo4g63o2gzdsp4a71w0g4infdafbodj8cl1h37nxn97ref97ith3me4wecrhih9wdaq0g36sii0imqrb8jy5fkshnjq4k0iw400i7n1qk4qvi3fgk8aysciec9yv6gdua9dnd19eujw37iy37ypivovr51lxkb8rtmogn9ry1axg6cjbf5e3dxt8ajbjwettsp5wxsxajngw7lq4gmvlerb5cdbfoeho67jd8azv7juk4ap5nvh7naabju6lyat29o7pjmnb3biew7c5lwjb32rgl6bhmi64rwvizpqia26tual234b5odg16vook5cy11lcnckwc8qs2ea9slasi47zld43yddgtcapsbmif9d6tro1r38209i1mdhwqa2ir3v0flrh3rj9ipapgv2axg9ruf8qni5exi85x58yhdadstc94o73i2qlrrrlw5epwsd == \k\r\n\6\2\m\h\n\3\l\4\0\w\q\s\l\v\l\6\7\p\0\r\2\s\b\1\y\4\o\m\s\m\u\c\c\8\7\c\j\k\m\l\8\d\y\7\t\8\a\y\p\i\7\j\9\e\5\s\u\8\u\o\4\g\6\3\o\2\g\z\d\s\p\4\a\7\1\w\0\g\4\i\n\f\d\a\f\b\o\d\j\8\c\l\1\h\3\7\n\x\n\9\7\r\e\f\9\7\i\t\h\3\m\e\4\w\e\c\r\h\i\h\9\w\d\a\q\0\g\3\6\s\i\i\0\i\m\q\r\b\8\j\y\5\f\k\s\h\n\j\q\4\k\0\i\w\4\0\0\i\7\n\1\q\k\4\q\v\i\3\f\g\k\8\a\y\s\c\i\e\c\9\y\v\6\g\d\u\a\9\d\n\d\1\9\e\u\j\w\3\7\i\y\3\7\y\p\i\v\o\v\r\5\1\l\x\k\b\8\r\t\m\o\g\n\9\r\y\1\a\x\g\6\c\j\b\f\5\e\3\d\x\t\8\a\j\b\j\w\e\t\t\s\p\5\w\x\s\x\a\j\n\g\w\7\l\q\4\g\m\v\l\e\r\b\5\c\d\b\f\o\e\h\o\6\7\j\d\8\a\z\v\7\j\u\k\4\a\p\5\n\v\h\7\n\a\a\b\j\u\6\l\y\a\t\2\9\o\7\p\j\m\n\b\3\b\i\e\w\7\c\5\l\w\j\b\3\2\r\g\l\6\b\h\m\i\6\4\r\w\v\i\z\p\q\i\a\2\6\t\u\a\l\2\3\4\b\5\o\d\g\1\6\v\o\o\k\5\c\y\1\1\l\c\n\c\k\w\c\8\q\s\2\e\a\9\s\l\a\s\i\4\7\z\l\d\4\3\y\d\d\g\t\c\a\p\s\b\m\i\f\9\d\6\t\r\o\1\r\3\8\2\0\9\i\1\m\d\h\w\q\a\2\i\r\3\v\0\f\l\r\h\3\r\j\9\i\p\a\p\g\v\2\a\x\g\9\r\u\f\8\q\n\i\5\e\x\i\8\5\x\5\8\y\h\d\a\d\s\t\c\9\4\o\7\3\i\2\q\l\r\r\r\l\w\5\e\p\w\s\d ]]
00:12:37.490   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:12:37.490   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512
00:12:37.490   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable
00:12:37.490   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
00:12:37.490   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:37.490   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:12:37.490  [2024-11-18 05:57:58.396023] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:37.490  [2024-11-18 05:57:58.396194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84660 ]
00:12:37.749  [2024-11-18 05:57:58.556399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:37.749  [2024-11-18 05:57:58.582214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.749  
[2024-11-18T05:57:58.986Z] Copying: 512/512 [B] (average 500 kBps)
00:12:38.008  
00:12:38.008   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6ab59twry8upfcc3a5woi1a2ckv2s12p73tp79uu3vmzebcyt4a0ydmsgugyfwrlped9kyoeep45istxqxpmafdshfchuwdmgriphyqloku5pi181d3wn6cyebuqelvfgu4s9pf22w5jw87jvlny3j99acwvlaex8htsm027d2d0czovyrinn5q4of8u0y3ke0809thmctqpqowt5avlmv18ndgexz98tjpszb4erap2i4n4vvof5ywyagrmrsab6bwnc267s47fbin247sb838m3l9vkr77kprabt7bmdo9stn2wnjx57lenhxchsizyaf1ggdi0zvdtomun042mge742cswpojy85mtfkp6vktcvjj58ihwfxe9p9gnopww8rtf1tqb2kq3zdqow598ftl073voj4pyuwytj7boyedcyzmqmu0040mk5swz2niocn3o92qll7qj4s88cp72hwkal8grlihp9dqivkqqdvutwejzd10t303o63d2h8h == \6\a\b\5\9\t\w\r\y\8\u\p\f\c\c\3\a\5\w\o\i\1\a\2\c\k\v\2\s\1\2\p\7\3\t\p\7\9\u\u\3\v\m\z\e\b\c\y\t\4\a\0\y\d\m\s\g\u\g\y\f\w\r\l\p\e\d\9\k\y\o\e\e\p\4\5\i\s\t\x\q\x\p\m\a\f\d\s\h\f\c\h\u\w\d\m\g\r\i\p\h\y\q\l\o\k\u\5\p\i\1\8\1\d\3\w\n\6\c\y\e\b\u\q\e\l\v\f\g\u\4\s\9\p\f\2\2\w\5\j\w\8\7\j\v\l\n\y\3\j\9\9\a\c\w\v\l\a\e\x\8\h\t\s\m\0\2\7\d\2\d\0\c\z\o\v\y\r\i\n\n\5\q\4\o\f\8\u\0\y\3\k\e\0\8\0\9\t\h\m\c\t\q\p\q\o\w\t\5\a\v\l\m\v\1\8\n\d\g\e\x\z\9\8\t\j\p\s\z\b\4\e\r\a\p\2\i\4\n\4\v\v\o\f\5\y\w\y\a\g\r\m\r\s\a\b\6\b\w\n\c\2\6\7\s\4\7\f\b\i\n\2\4\7\s\b\8\3\8\m\3\l\9\v\k\r\7\7\k\p\r\a\b\t\7\b\m\d\o\9\s\t\n\2\w\n\j\x\5\7\l\e\n\h\x\c\h\s\i\z\y\a\f\1\g\g\d\i\0\z\v\d\t\o\m\u\n\0\4\2\m\g\e\7\4\2\c\s\w\p\o\j\y\8\5\m\t\f\k\p\6\v\k\t\c\v\j\j\5\8\i\h\w\f\x\e\9\p\9\g\n\o\p\w\w\8\r\t\f\1\t\q\b\2\k\q\3\z\d\q\o\w\5\9\8\f\t\l\0\7\3\v\o\j\4\p\y\u\w\y\t\j\7\b\o\y\e\d\c\y\z\m\q\m\u\0\0\4\0\m\k\5\s\w\z\2\n\i\o\c\n\3\o\9\2\q\l\l\7\q\j\4\s\8\8\c\p\7\2\h\w\k\a\l\8\g\r\l\i\h\p\9\d\q\i\v\k\q\q\d\v\u\t\w\e\j\z\d\1\0\t\3\0\3\o\6\3\d\2\h\8\h ]]
00:12:38.008   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:38.008   05:57:58 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:12:38.008  [2024-11-18 05:57:58.886644] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:38.008  [2024-11-18 05:57:58.886882] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84668 ]
00:12:38.267  [2024-11-18 05:57:59.043593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.267  [2024-11-18 05:57:59.068724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.267  
[2024-11-18T05:57:59.503Z] Copying: 512/512 [B] (average 500 kBps)
00:12:38.525  
00:12:38.525   05:57:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6ab59twry8upfcc3a5woi1a2ckv2s12p73tp79uu3vmzebcyt4a0ydmsgugyfwrlped9kyoeep45istxqxpmafdshfchuwdmgriphyqloku5pi181d3wn6cyebuqelvfgu4s9pf22w5jw87jvlny3j99acwvlaex8htsm027d2d0czovyrinn5q4of8u0y3ke0809thmctqpqowt5avlmv18ndgexz98tjpszb4erap2i4n4vvof5ywyagrmrsab6bwnc267s47fbin247sb838m3l9vkr77kprabt7bmdo9stn2wnjx57lenhxchsizyaf1ggdi0zvdtomun042mge742cswpojy85mtfkp6vktcvjj58ihwfxe9p9gnopww8rtf1tqb2kq3zdqow598ftl073voj4pyuwytj7boyedcyzmqmu0040mk5swz2niocn3o92qll7qj4s88cp72hwkal8grlihp9dqivkqqdvutwejzd10t303o63d2h8h == \6\a\b\5\9\t\w\r\y\8\u\p\f\c\c\3\a\5\w\o\i\1\a\2\c\k\v\2\s\1\2\p\7\3\t\p\7\9\u\u\3\v\m\z\e\b\c\y\t\4\a\0\y\d\m\s\g\u\g\y\f\w\r\l\p\e\d\9\k\y\o\e\e\p\4\5\i\s\t\x\q\x\p\m\a\f\d\s\h\f\c\h\u\w\d\m\g\r\i\p\h\y\q\l\o\k\u\5\p\i\1\8\1\d\3\w\n\6\c\y\e\b\u\q\e\l\v\f\g\u\4\s\9\p\f\2\2\w\5\j\w\8\7\j\v\l\n\y\3\j\9\9\a\c\w\v\l\a\e\x\8\h\t\s\m\0\2\7\d\2\d\0\c\z\o\v\y\r\i\n\n\5\q\4\o\f\8\u\0\y\3\k\e\0\8\0\9\t\h\m\c\t\q\p\q\o\w\t\5\a\v\l\m\v\1\8\n\d\g\e\x\z\9\8\t\j\p\s\z\b\4\e\r\a\p\2\i\4\n\4\v\v\o\f\5\y\w\y\a\g\r\m\r\s\a\b\6\b\w\n\c\2\6\7\s\4\7\f\b\i\n\2\4\7\s\b\8\3\8\m\3\l\9\v\k\r\7\7\k\p\r\a\b\t\7\b\m\d\o\9\s\t\n\2\w\n\j\x\5\7\l\e\n\h\x\c\h\s\i\z\y\a\f\1\g\g\d\i\0\z\v\d\t\o\m\u\n\0\4\2\m\g\e\7\4\2\c\s\w\p\o\j\y\8\5\m\t\f\k\p\6\v\k\t\c\v\j\j\5\8\i\h\w\f\x\e\9\p\9\g\n\o\p\w\w\8\r\t\f\1\t\q\b\2\k\q\3\z\d\q\o\w\5\9\8\f\t\l\0\7\3\v\o\j\4\p\y\u\w\y\t\j\7\b\o\y\e\d\c\y\z\m\q\m\u\0\0\4\0\m\k\5\s\w\z\2\n\i\o\c\n\3\o\9\2\q\l\l\7\q\j\4\s\8\8\c\p\7\2\h\w\k\a\l\8\g\r\l\i\h\p\9\d\q\i\v\k\q\q\d\v\u\t\w\e\j\z\d\1\0\t\3\0\3\o\6\3\d\2\h\8\h ]]
00:12:38.525   05:57:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:38.525   05:57:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:12:38.525  [2024-11-18 05:57:59.365800] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:38.525  [2024-11-18 05:57:59.366027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84676 ]
00:12:38.782  [2024-11-18 05:57:59.520664] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:38.782  [2024-11-18 05:57:59.541446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.782  
[2024-11-18T05:57:59.760Z] Copying: 512/512 [B] (average 125 kBps)
00:12:38.782  
00:12:39.041   05:57:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6ab59twry8upfcc3a5woi1a2ckv2s12p73tp79uu3vmzebcyt4a0ydmsgugyfwrlped9kyoeep45istxqxpmafdshfchuwdmgriphyqloku5pi181d3wn6cyebuqelvfgu4s9pf22w5jw87jvlny3j99acwvlaex8htsm027d2d0czovyrinn5q4of8u0y3ke0809thmctqpqowt5avlmv18ndgexz98tjpszb4erap2i4n4vvof5ywyagrmrsab6bwnc267s47fbin247sb838m3l9vkr77kprabt7bmdo9stn2wnjx57lenhxchsizyaf1ggdi0zvdtomun042mge742cswpojy85mtfkp6vktcvjj58ihwfxe9p9gnopww8rtf1tqb2kq3zdqow598ftl073voj4pyuwytj7boyedcyzmqmu0040mk5swz2niocn3o92qll7qj4s88cp72hwkal8grlihp9dqivkqqdvutwejzd10t303o63d2h8h == \6\a\b\5\9\t\w\r\y\8\u\p\f\c\c\3\a\5\w\o\i\1\a\2\c\k\v\2\s\1\2\p\7\3\t\p\7\9\u\u\3\v\m\z\e\b\c\y\t\4\a\0\y\d\m\s\g\u\g\y\f\w\r\l\p\e\d\9\k\y\o\e\e\p\4\5\i\s\t\x\q\x\p\m\a\f\d\s\h\f\c\h\u\w\d\m\g\r\i\p\h\y\q\l\o\k\u\5\p\i\1\8\1\d\3\w\n\6\c\y\e\b\u\q\e\l\v\f\g\u\4\s\9\p\f\2\2\w\5\j\w\8\7\j\v\l\n\y\3\j\9\9\a\c\w\v\l\a\e\x\8\h\t\s\m\0\2\7\d\2\d\0\c\z\o\v\y\r\i\n\n\5\q\4\o\f\8\u\0\y\3\k\e\0\8\0\9\t\h\m\c\t\q\p\q\o\w\t\5\a\v\l\m\v\1\8\n\d\g\e\x\z\9\8\t\j\p\s\z\b\4\e\r\a\p\2\i\4\n\4\v\v\o\f\5\y\w\y\a\g\r\m\r\s\a\b\6\b\w\n\c\2\6\7\s\4\7\f\b\i\n\2\4\7\s\b\8\3\8\m\3\l\9\v\k\r\7\7\k\p\r\a\b\t\7\b\m\d\o\9\s\t\n\2\w\n\j\x\5\7\l\e\n\h\x\c\h\s\i\z\y\a\f\1\g\g\d\i\0\z\v\d\t\o\m\u\n\0\4\2\m\g\e\7\4\2\c\s\w\p\o\j\y\8\5\m\t\f\k\p\6\v\k\t\c\v\j\j\5\8\i\h\w\f\x\e\9\p\9\g\n\o\p\w\w\8\r\t\f\1\t\q\b\2\k\q\3\z\d\q\o\w\5\9\8\f\t\l\0\7\3\v\o\j\4\p\y\u\w\y\t\j\7\b\o\y\e\d\c\y\z\m\q\m\u\0\0\4\0\m\k\5\s\w\z\2\n\i\o\c\n\3\o\9\2\q\l\l\7\q\j\4\s\8\8\c\p\7\2\h\w\k\a\l\8\g\r\l\i\h\p\9\d\q\i\v\k\q\q\d\v\u\t\w\e\j\z\d\1\0\t\3\0\3\o\6\3\d\2\h\8\h ]]
00:12:39.041   05:57:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:39.041   05:57:59 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:12:39.041  [2024-11-18 05:57:59.828164] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:39.041  [2024-11-18 05:57:59.828377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84685 ]
00:12:39.041  [2024-11-18 05:57:59.984467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:39.041  [2024-11-18 05:58:00.005258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:39.300  
[2024-11-18T05:58:00.278Z] Copying: 512/512 [B] (average 125 kBps)
00:12:39.300  
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 6ab59twry8upfcc3a5woi1a2ckv2s12p73tp79uu3vmzebcyt4a0ydmsgugyfwrlped9kyoeep45istxqxpmafdshfchuwdmgriphyqloku5pi181d3wn6cyebuqelvfgu4s9pf22w5jw87jvlny3j99acwvlaex8htsm027d2d0czovyrinn5q4of8u0y3ke0809thmctqpqowt5avlmv18ndgexz98tjpszb4erap2i4n4vvof5ywyagrmrsab6bwnc267s47fbin247sb838m3l9vkr77kprabt7bmdo9stn2wnjx57lenhxchsizyaf1ggdi0zvdtomun042mge742cswpojy85mtfkp6vktcvjj58ihwfxe9p9gnopww8rtf1tqb2kq3zdqow598ftl073voj4pyuwytj7boyedcyzmqmu0040mk5swz2niocn3o92qll7qj4s88cp72hwkal8grlihp9dqivkqqdvutwejzd10t303o63d2h8h == \6\a\b\5\9\t\w\r\y\8\u\p\f\c\c\3\a\5\w\o\i\1\a\2\c\k\v\2\s\1\2\p\7\3\t\p\7\9\u\u\3\v\m\z\e\b\c\y\t\4\a\0\y\d\m\s\g\u\g\y\f\w\r\l\p\e\d\9\k\y\o\e\e\p\4\5\i\s\t\x\q\x\p\m\a\f\d\s\h\f\c\h\u\w\d\m\g\r\i\p\h\y\q\l\o\k\u\5\p\i\1\8\1\d\3\w\n\6\c\y\e\b\u\q\e\l\v\f\g\u\4\s\9\p\f\2\2\w\5\j\w\8\7\j\v\l\n\y\3\j\9\9\a\c\w\v\l\a\e\x\8\h\t\s\m\0\2\7\d\2\d\0\c\z\o\v\y\r\i\n\n\5\q\4\o\f\8\u\0\y\3\k\e\0\8\0\9\t\h\m\c\t\q\p\q\o\w\t\5\a\v\l\m\v\1\8\n\d\g\e\x\z\9\8\t\j\p\s\z\b\4\e\r\a\p\2\i\4\n\4\v\v\o\f\5\y\w\y\a\g\r\m\r\s\a\b\6\b\w\n\c\2\6\7\s\4\7\f\b\i\n\2\4\7\s\b\8\3\8\m\3\l\9\v\k\r\7\7\k\p\r\a\b\t\7\b\m\d\o\9\s\t\n\2\w\n\j\x\5\7\l\e\n\h\x\c\h\s\i\z\y\a\f\1\g\g\d\i\0\z\v\d\t\o\m\u\n\0\4\2\m\g\e\7\4\2\c\s\w\p\o\j\y\8\5\m\t\f\k\p\6\v\k\t\c\v\j\j\5\8\i\h\w\f\x\e\9\p\9\g\n\o\p\w\w\8\r\t\f\1\t\q\b\2\k\q\3\z\d\q\o\w\5\9\8\f\t\l\0\7\3\v\o\j\4\p\y\u\w\y\t\j\7\b\o\y\e\d\c\y\z\m\q\m\u\0\0\4\0\m\k\5\s\w\z\2\n\i\o\c\n\3\o\9\2\q\l\l\7\q\j\4\s\8\8\c\p\7\2\h\w\k\a\l\8\g\r\l\i\h\p\9\d\q\i\v\k\q\q\d\v\u\t\w\e\j\z\d\1\0\t\3\0\3\o\6\3\d\2\h\8\h ]]
00:12:39.300  
00:12:39.300  real	0m3.751s
00:12:39.300  user	0m1.700s
00:12:39.300  sys	0m1.086s
00:12:39.300  ************************************
00:12:39.300  END TEST dd_flags_misc
00:12:39.300  ************************************
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
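The long `[[ 6ab59... == \6\a\b... ]]` lines above are bash xtrace for posix.sh's content check: after each copy the harness reads both dump files back and compares them, and the right-hand side shows up backslash-escaped because the script quotes it so `[[ ]]` compares literally rather than as a glob pattern. A minimal sketch of that check; `verify_copy` and the short file names are illustrative, not the harness's own:

```bash
# Minimal sketch of the read-back verification done after each copy.
verify_copy() {
    local src=$1 dst=$2
    # Quoting the right-hand side forces a literal, byte-for-byte
    # comparison; unquoted, == inside [[ ]] would glob-match. The
    # backslash-escaped form in the xtrace is bash re-quoting that word.
    [[ "$(< "$dst")" == "$(< "$src")" ]]
}

verify_copy dd.dump0 dd.dump1 || echo "copy mismatch" >&2
```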
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO'
00:12:39.300  * Second test run, disabling liburing, forcing AIO
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio")
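`DD_APP+=("--aio")` is how the suite forces the second pass onto the POSIX AIO backend: the spdk_dd invocation is kept in a bash array, so appending the flag once makes every subsequent test pick it up. A sketch of the pattern; the binary path is the one seen in this log, the rest is illustrative:

```bash
# The command under test is held in an array, not a string.
DD_APP=(/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd)

# Second pass: disable liburing by forcing the POSIX AIO backend.
DD_APP+=("--aio")

# Every later test expands the array and picks up --aio for free:
"${DD_APP[@]}" --if=dd.dump0 --of=dd.dump1 --oflag=append
```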
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:39.300   05:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
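The `'[' 2 -le 1 ']'` and `xtrace_disable` lines are `run_test` doing its bookkeeping: it validates that it got a test name plus a command, prints the START/END banners, and times the body. A rough sketch consistent with what this log shows, not the real helper from common/autotest_common.sh:

```bash
# Rough sketch of run_test, inferred from this log's output only.
run_test() {
    [ "$#" -le 1 ] && return 1   # the '[' 2 -le 1 ']' check above
    local name=$1
    shift
    printf '************************************\n'
    printf 'START TEST %s\n' "$name"
    printf '************************************\n'
    time "$@"                    # produces the real/user/sys lines
    printf '************************************\n'
    printf 'END TEST %s\n' "$name"
    printf '************************************\n'
}
```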
00:12:39.559  ************************************
00:12:39.559  START TEST dd_flag_append_forced_aio
00:12:39.559  ************************************
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1
00:12:39.559    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32
00:12:39.559    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:12:39.559    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=buz7w0c418ui6d88osa3bizhrcup4dx8
00:12:39.559    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32
00:12:39.559    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:12:39.559    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=e91en61wphdtlwsonf77m4o170deymr3
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s buz7w0c418ui6d88osa3bizhrcup4dx8
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s e91en61wphdtlwsonf77m4o170deymr3
00:12:39.559   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:12:39.559  [2024-11-18 05:58:00.348666] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:39.559  [2024-11-18 05:58:00.348870] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84707 ]
00:12:39.559  [2024-11-18 05:58:00.502673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:39.559  [2024-11-18 05:58:00.523116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:39.818  
[2024-11-18T05:58:00.796Z] Copying: 32/32 [B] (average 31 kBps)
00:12:39.818  
00:12:39.818   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ e91en61wphdtlwsonf77m4o170deymr3buz7w0c418ui6d88osa3bizhrcup4dx8 == \e\9\1\e\n\6\1\w\p\h\d\t\l\w\s\o\n\f\7\7\m\4\o\1\7\0\d\e\y\m\r\3\b\u\z\7\w\0\c\4\1\8\u\i\6\d\8\8\o\s\a\3\b\i\z\h\r\c\u\p\4\d\x\8 ]]
00:12:39.818  
00:12:39.818  real	0m0.471s
00:12:39.818  user	0m0.220s
00:12:39.818  sys	0m0.134s
00:12:39.818   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:39.818   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:39.818  ************************************
00:12:39.818  END TEST dd_flag_append_forced_aio
00:12:39.818  ************************************
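The append test above generates two 32-byte random strings (`gen_bytes 32`), writes one to each dump file, copies dump0 onto dump1 with `--oflag=append`, and then expects dump1 to read back as its original content followed by dump0's, which is exactly what the `[[ e91en...buz7w... == ... ]]` check encodes. A sketch under those assumptions; `gen_bytes` here is a stand-in for the harness's generator and the paths are illustrative:

```bash
# Stand-in for the harness's random-string helper.
gen_bytes() { tr -dc 'a-z0-9' < /dev/urandom | head -c "$1"; }

dump0=$(gen_bytes 32)
dump1=$(gen_bytes 32)
printf %s "$dump0" > dd.dump0
printf %s "$dump1" > dd.dump1

spdk_dd --aio --if=dd.dump0 --of=dd.dump1 --oflag=append

# With O_APPEND, dump1 must now be its old content plus dump0's.
[[ "$(< dd.dump1)" == "${dump1}${dump0}" ]]
```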
00:12:39.818   05:58:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory
00:12:39.818   05:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:39.818   05:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:39.818   05:58:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:40.077  ************************************
00:12:40.077  START TEST dd_flag_directory_forced_aio
00:12:40.077  ************************************
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.077    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.077    05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:40.077   05:58:00 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:40.077  [2024-11-18 05:58:00.865231] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:40.077  [2024-11-18 05:58:00.865440] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84740 ]
00:12:40.077  [2024-11-18 05:58:01.016846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:40.077  [2024-11-18 05:58:01.036772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:40.336  [2024-11-18 05:58:01.082955] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:40.336  [2024-11-18 05:58:01.083057] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:40.336  [2024-11-18 05:58:01.083085] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:40.336  [2024-11-18 05:58:01.154592] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
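The `es=236` through `es=1` lines show how the NOT wrapper normalizes the failing command's exit status: anything above 128 has the signal-style offset stripped (236 - 128 = 108), known failure codes are then collapsed to 1, and `(( !es == 0 ))` finally asserts the command really did fail (with es=1, `!es` is 0, so the test passes). A sketch inferred from this log, not copied from the harness source:

```bash
# Status folding as seen above (es=236 -> 108 -> 1).
normalize_status() {
    local es=$1
    (( es > 128 )) && es=$(( es - 128 ))  # strip the 128 offset
    case "$es" in
        88|108) es=1 ;;                   # codes seen in this log
    esac
    echo "$es"
}

es=$(normalize_status 236)
(( !es == 0 )) && echo "wrapped command failed, as NOT expects"
```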
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.336    05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.336    05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:40.336   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:12:40.336  [2024-11-18 05:58:01.295152] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:40.336  [2024-11-18 05:58:01.295396] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84745 ]
00:12:40.595  [2024-11-18 05:58:01.454786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:40.595  [2024-11-18 05:58:01.475226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:40.595  [2024-11-18 05:58:01.520756] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:40.595  [2024-11-18 05:58:01.520871] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:12:40.595  [2024-11-18 05:58:01.520894] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:40.855  [2024-11-18 05:58:01.597183] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:40.855  
00:12:40.855  real	0m0.878s
00:12:40.855  user	0m0.404s
00:12:40.855  sys	0m0.272s
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:40.855  ************************************
00:12:40.855  END TEST dd_flag_directory_forced_aio
00:12:40.855  ************************************
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x
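Before running the command it wraps, NOT calls `valid_exec_arg`, whose `type -t` / `type -P` probes are visible in the xtrace above: functions and builtins are accepted as-is, while plain files are resolved through PATH and checked for the execute bit. A sketch of that gate, suggested by the xtrace rather than taken from the source:

```bash
# Sketch of valid_exec_arg's argument resolution.
valid_exec_arg() {
    local arg=$1
    case "$(type -t "$arg")" in
        builtin|function) return 0 ;;            # runnable as-is
        file) arg=$(type -P "$arg") && [[ -x "$arg" ]] ;;
        *) return 1 ;;
    esac
}

# NOT uses this as a gate before invoking the command under test:
valid_exec_arg /bin/true && echo "argument is executable"
```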
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:40.855  ************************************
00:12:40.855  START TEST dd_flag_nofollow_forced_aio
00:12:40.855  ************************************
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.855    05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.855    05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:40.855   05:58:01 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:40.855  [2024-11-18 05:58:01.788555] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:40.855  [2024-11-18 05:58:01.788714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84779 ]
00:12:41.114  [2024-11-18 05:58:01.936368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:41.114  [2024-11-18 05:58:01.956855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.114  [2024-11-18 05:58:02.006328] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:12:41.114  [2024-11-18 05:58:02.006451] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:12:41.114  [2024-11-18 05:58:02.006482] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:41.114  [2024-11-18 05:58:02.078047] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:41.374    05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:41.374    05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:41.374   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:12:41.374  [2024-11-18 05:58:02.223661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:41.374  [2024-11-18 05:58:02.223896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84785 ]
00:12:41.633  [2024-11-18 05:58:02.376868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:41.633  [2024-11-18 05:58:02.400036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:41.633  [2024-11-18 05:58:02.446428] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:12:41.633  [2024-11-18 05:58:02.446500] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:12:41.633  [2024-11-18 05:58:02.446538] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:41.633  [2024-11-18 05:58:02.518392] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:41.633   05:58:02 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:41.892  [2024-11-18 05:58:02.654163] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:41.892  [2024-11-18 05:58:02.654395] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84794 ]
00:12:41.892  [2024-11-18 05:58:02.804151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:41.892  [2024-11-18 05:58:02.825462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:42.150  
[2024-11-18T05:58:03.128Z] Copying: 512/512 [B] (average 500 kBps)
00:12:42.150  
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ mlnjnyfueo42wu9tgsnuy2ecs7204xhiyxid1f47vyxl8yknu9k9se53tg4h6s2qvp6hr4ylynws8fxjp3937h129jfks1iwbhwgwlxhfz2bnay3fygzj7vgosf5ljkxw5a0eybc8nburcme604xdwyuee687uwejiuc7e79hg6gowy9sugmscogtvf7ujd741g97ws0tzrnf9mf3g2g0glokodl3k17pcc337xxzjpex3q5wox62br8axg8la16qr46ykycxf5czu98m44jexezpmd4f5h93xuwzt8rtt7oadk1n2obwwp51eigwphtu3om6a6gueoy1uj9jmong3k1mzb70xy5oi6maqyuwv3o2jkir5s4jp1n23raxasr69e05sb8cpkuuxum9yvz6unmd5pkfvh4lvs7acnqkpvhecc3xsbuk985qruz412owkuk12rmix07b1ez2u46yf1e9l0gcqmmlyvbhfnwfqm0a80wmdry2st53q6en14j == \m\l\n\j\n\y\f\u\e\o\4\2\w\u\9\t\g\s\n\u\y\2\e\c\s\7\2\0\4\x\h\i\y\x\i\d\1\f\4\7\v\y\x\l\8\y\k\n\u\9\k\9\s\e\5\3\t\g\4\h\6\s\2\q\v\p\6\h\r\4\y\l\y\n\w\s\8\f\x\j\p\3\9\3\7\h\1\2\9\j\f\k\s\1\i\w\b\h\w\g\w\l\x\h\f\z\2\b\n\a\y\3\f\y\g\z\j\7\v\g\o\s\f\5\l\j\k\x\w\5\a\0\e\y\b\c\8\n\b\u\r\c\m\e\6\0\4\x\d\w\y\u\e\e\6\8\7\u\w\e\j\i\u\c\7\e\7\9\h\g\6\g\o\w\y\9\s\u\g\m\s\c\o\g\t\v\f\7\u\j\d\7\4\1\g\9\7\w\s\0\t\z\r\n\f\9\m\f\3\g\2\g\0\g\l\o\k\o\d\l\3\k\1\7\p\c\c\3\3\7\x\x\z\j\p\e\x\3\q\5\w\o\x\6\2\b\r\8\a\x\g\8\l\a\1\6\q\r\4\6\y\k\y\c\x\f\5\c\z\u\9\8\m\4\4\j\e\x\e\z\p\m\d\4\f\5\h\9\3\x\u\w\z\t\8\r\t\t\7\o\a\d\k\1\n\2\o\b\w\w\p\5\1\e\i\g\w\p\h\t\u\3\o\m\6\a\6\g\u\e\o\y\1\u\j\9\j\m\o\n\g\3\k\1\m\z\b\7\0\x\y\5\o\i\6\m\a\q\y\u\w\v\3\o\2\j\k\i\r\5\s\4\j\p\1\n\2\3\r\a\x\a\s\r\6\9\e\0\5\s\b\8\c\p\k\u\u\x\u\m\9\y\v\z\6\u\n\m\d\5\p\k\f\v\h\4\l\v\s\7\a\c\n\q\k\p\v\h\e\c\c\3\x\s\b\u\k\9\8\5\q\r\u\z\4\1\2\o\w\k\u\k\1\2\r\m\i\x\0\7\b\1\e\z\2\u\4\6\y\f\1\e\9\l\0\g\c\q\m\m\l\y\v\b\h\f\n\w\f\q\m\0\a\8\0\w\m\d\r\y\2\s\t\5\3\q\6\e\n\1\4\j ]]
00:12:42.150  
00:12:42.150  real	0m1.326s
00:12:42.150  user	0m0.615s
00:12:42.150  sys	0m0.389s
00:12:42.150  ************************************
00:12:42.150  END TEST dd_flag_nofollow_forced_aio
00:12:42.150  ************************************
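The nofollow test builds symlinks to both dump files with `ln -fs`, expects spdk_dd to refuse them when `--iflag=nofollow` or `--oflag=nofollow` is set (opening with O_NOFOLLOW fails with ELOOP, hence the "Too many levels of symbolic links" errors), and then copies through the link without the flag to confirm the positive path. A simplified sketch; `spdk_dd` stands for the full build/bin path used above and the NOT helper is reduced to `!`:

```bash
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link

# With nofollow, opening a symlink must fail (O_NOFOLLOW -> ELOOP):
! spdk_dd --aio --if=dd.dump0.link --iflag=nofollow --of=dd.dump1
! spdk_dd --aio --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow

# Without the flag the link is followed and the copy succeeds:
spdk_dd --aio --if=dd.dump0.link --of=dd.dump1
```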
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:42.150  ************************************
00:12:42.150  START TEST dd_flag_noatime_forced_aio
00:12:42.150  ************************************
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:42.150    05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731909482
00:12:42.150    05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731909483
00:12:42.150   05:58:03 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1
00:12:43.525   05:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:43.526  [2024-11-18 05:58:04.187451] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:43.526  [2024-11-18 05:58:04.187685] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84835 ]
00:12:43.526  [2024-11-18 05:58:04.348959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:43.526  [2024-11-18 05:58:04.373474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:43.526  
[2024-11-18T05:58:04.771Z] Copying: 512/512 [B] (average 500 kBps)
00:12:43.793  
00:12:43.793    05:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:43.793   05:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731909482 ))
00:12:43.793    05:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:43.793   05:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731909483 ))
00:12:43.793   05:58:04 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:43.793  [2024-11-18 05:58:04.674861] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:43.793  [2024-11-18 05:58:04.675063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84848 ]
00:12:44.065  [2024-11-18 05:58:04.828542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:44.065  [2024-11-18 05:58:04.848793] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:44.065  
[2024-11-18T05:58:05.302Z] Copying: 512/512 [B] (average 500 kBps)
00:12:44.324  
00:12:44.324    05:58:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731909484 ))
00:12:44.324  
00:12:44.324  real	0m1.973s
00:12:44.324  user	0m0.433s
00:12:44.324  sys	0m0.307s
00:12:44.324  ************************************
00:12:44.324  END TEST dd_flag_noatime_forced_aio
00:12:44.324  ************************************
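The noatime test snapshots each file's access time with `stat --printf=%X`, sleeps one second so a fresh access would be visible, copies with `--iflag=noatime`, and asserts the source atime is unchanged; a second copy without the flag must then move the atime forward, which the `(( atime_if < ... ))` check above confirms. A sketch of those assertions with illustrative paths:

```bash
atime_if=$(stat --printf=%X dd.dump0)
sleep 1   # let the clock tick so a new access would show

spdk_dd --aio --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_if ))   # atime untouched

spdk_dd --aio --if=dd.dump0 --of=dd.dump1
(( atime_if < $(stat --printf=%X dd.dump0) ))    # normal read bumps it
```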
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:44.324  ************************************
00:12:44.324  START TEST dd_flags_misc_forced_aio
00:12:44.324  ************************************
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
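Here `flags_ro` holds the flags valid on the read side (direct, nonblock) and `flags_rw` extends that set with write-only flags (sync, dsync); the nested `for flag_ro` / `for flag_rw` loops that follow then exercise every iflag/oflag pairing against freshly generated data. A sketch of the matrix driving the runs below, with `spdk_dd` standing for the full build/bin path:

```bash
flags_ro=(direct nonblock)
flags_rw=("${flags_ro[@]}" sync dsync)   # write side adds sync/dsync

for flag_ro in "${flags_ro[@]}"; do
    for flag_rw in "${flags_rw[@]}"; do
        spdk_dd --aio --if=dd.dump0 --iflag="$flag_ro" \
                --of=dd.dump1 --oflag="$flag_rw"
    done
done
```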
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:44.324   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:12:44.324  [2024-11-18 05:58:05.192662] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:44.324  [2024-11-18 05:58:05.192899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84874 ]
00:12:44.582  [2024-11-18 05:58:05.343788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:44.582  [2024-11-18 05:58:05.363691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:44.582  
[2024-11-18T05:58:05.818Z] Copying: 512/512 [B] (average 500 kBps)
00:12:44.840  
00:12:44.840   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8mbq0qgpal8l5u7tmoqfn3ojubxphs8yf3l3ygnqpjr15qkeghh2ecq1g8tf2y4rzcuzbpx85wy1r3kv121icqmlquy803eatm31qhu5vax0gz5fr23hp9j06wf670ymgp206kvet0s5vqymbtg8wg1vnt7ykxl53oiudfpgq4bkfp650jb3c63n6tw03o5lo101i8r4rybx02jafn8javmt6jxpqvknyjbq7yb3vr359hieid4ok7xc98sk1daqooku51pm26xgv0z4w20o10bv2nv1j5lgmyqqecpngd08u9r1enl6atpv0u812cisifzzjsxejcwpn0u6nco5qfpdsu22hu82bjmeao59lm35qdnkw7v4moh6mzprllnxb3euqxz0g70y2rnaikg5d335cv4o76pnvrpoixoa1p13g5g71n1dkzijinyed1sfu8nju548843ocblhd87d6gkzg8o66ljkg6oc6yg4krvjzs2c3rpbln0s4w04l34n == \8\m\b\q\0\q\g\p\a\l\8\l\5\u\7\t\m\o\q\f\n\3\o\j\u\b\x\p\h\s\8\y\f\3\l\3\y\g\n\q\p\j\r\1\5\q\k\e\g\h\h\2\e\c\q\1\g\8\t\f\2\y\4\r\z\c\u\z\b\p\x\8\5\w\y\1\r\3\k\v\1\2\1\i\c\q\m\l\q\u\y\8\0\3\e\a\t\m\3\1\q\h\u\5\v\a\x\0\g\z\5\f\r\2\3\h\p\9\j\0\6\w\f\6\7\0\y\m\g\p\2\0\6\k\v\e\t\0\s\5\v\q\y\m\b\t\g\8\w\g\1\v\n\t\7\y\k\x\l\5\3\o\i\u\d\f\p\g\q\4\b\k\f\p\6\5\0\j\b\3\c\6\3\n\6\t\w\0\3\o\5\l\o\1\0\1\i\8\r\4\r\y\b\x\0\2\j\a\f\n\8\j\a\v\m\t\6\j\x\p\q\v\k\n\y\j\b\q\7\y\b\3\v\r\3\5\9\h\i\e\i\d\4\o\k\7\x\c\9\8\s\k\1\d\a\q\o\o\k\u\5\1\p\m\2\6\x\g\v\0\z\4\w\2\0\o\1\0\b\v\2\n\v\1\j\5\l\g\m\y\q\q\e\c\p\n\g\d\0\8\u\9\r\1\e\n\l\6\a\t\p\v\0\u\8\1\2\c\i\s\i\f\z\z\j\s\x\e\j\c\w\p\n\0\u\6\n\c\o\5\q\f\p\d\s\u\2\2\h\u\8\2\b\j\m\e\a\o\5\9\l\m\3\5\q\d\n\k\w\7\v\4\m\o\h\6\m\z\p\r\l\l\n\x\b\3\e\u\q\x\z\0\g\7\0\y\2\r\n\a\i\k\g\5\d\3\3\5\c\v\4\o\7\6\p\n\v\r\p\o\i\x\o\a\1\p\1\3\g\5\g\7\1\n\1\d\k\z\i\j\i\n\y\e\d\1\s\f\u\8\n\j\u\5\4\8\8\4\3\o\c\b\l\h\d\8\7\d\6\g\k\z\g\8\o\6\6\l\j\k\g\6\o\c\6\y\g\4\k\r\v\j\z\s\2\c\3\r\p\b\l\n\0\s\4\w\0\4\l\3\4\n ]]
00:12:44.840   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:44.840   05:58:05 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:12:44.840  [2024-11-18 05:58:05.647552] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:44.840  [2024-11-18 05:58:05.647795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84884 ]
00:12:44.840  [2024-11-18 05:58:05.800325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:45.099  [2024-11-18 05:58:05.821508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.099  
[2024-11-18T05:58:06.077Z] Copying: 512/512 [B] (average 500 kBps)
00:12:45.099  
00:12:45.099   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8mbq0qgpal8l5u7tmoqfn3ojubxphs8yf3l3ygnqpjr15qkeghh2ecq1g8tf2y4rzcuzbpx85wy1r3kv121icqmlquy803eatm31qhu5vax0gz5fr23hp9j06wf670ymgp206kvet0s5vqymbtg8wg1vnt7ykxl53oiudfpgq4bkfp650jb3c63n6tw03o5lo101i8r4rybx02jafn8javmt6jxpqvknyjbq7yb3vr359hieid4ok7xc98sk1daqooku51pm26xgv0z4w20o10bv2nv1j5lgmyqqecpngd08u9r1enl6atpv0u812cisifzzjsxejcwpn0u6nco5qfpdsu22hu82bjmeao59lm35qdnkw7v4moh6mzprllnxb3euqxz0g70y2rnaikg5d335cv4o76pnvrpoixoa1p13g5g71n1dkzijinyed1sfu8nju548843ocblhd87d6gkzg8o66ljkg6oc6yg4krvjzs2c3rpbln0s4w04l34n == \8\m\b\q\0\q\g\p\a\l\8\l\5\u\7\t\m\o\q\f\n\3\o\j\u\b\x\p\h\s\8\y\f\3\l\3\y\g\n\q\p\j\r\1\5\q\k\e\g\h\h\2\e\c\q\1\g\8\t\f\2\y\4\r\z\c\u\z\b\p\x\8\5\w\y\1\r\3\k\v\1\2\1\i\c\q\m\l\q\u\y\8\0\3\e\a\t\m\3\1\q\h\u\5\v\a\x\0\g\z\5\f\r\2\3\h\p\9\j\0\6\w\f\6\7\0\y\m\g\p\2\0\6\k\v\e\t\0\s\5\v\q\y\m\b\t\g\8\w\g\1\v\n\t\7\y\k\x\l\5\3\o\i\u\d\f\p\g\q\4\b\k\f\p\6\5\0\j\b\3\c\6\3\n\6\t\w\0\3\o\5\l\o\1\0\1\i\8\r\4\r\y\b\x\0\2\j\a\f\n\8\j\a\v\m\t\6\j\x\p\q\v\k\n\y\j\b\q\7\y\b\3\v\r\3\5\9\h\i\e\i\d\4\o\k\7\x\c\9\8\s\k\1\d\a\q\o\o\k\u\5\1\p\m\2\6\x\g\v\0\z\4\w\2\0\o\1\0\b\v\2\n\v\1\j\5\l\g\m\y\q\q\e\c\p\n\g\d\0\8\u\9\r\1\e\n\l\6\a\t\p\v\0\u\8\1\2\c\i\s\i\f\z\z\j\s\x\e\j\c\w\p\n\0\u\6\n\c\o\5\q\f\p\d\s\u\2\2\h\u\8\2\b\j\m\e\a\o\5\9\l\m\3\5\q\d\n\k\w\7\v\4\m\o\h\6\m\z\p\r\l\l\n\x\b\3\e\u\q\x\z\0\g\7\0\y\2\r\n\a\i\k\g\5\d\3\3\5\c\v\4\o\7\6\p\n\v\r\p\o\i\x\o\a\1\p\1\3\g\5\g\7\1\n\1\d\k\z\i\j\i\n\y\e\d\1\s\f\u\8\n\j\u\5\4\8\8\4\3\o\c\b\l\h\d\8\7\d\6\g\k\z\g\8\o\6\6\l\j\k\g\6\o\c\6\y\g\4\k\r\v\j\z\s\2\c\3\r\p\b\l\n\0\s\4\w\0\4\l\3\4\n ]]
00:12:45.099   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:45.099   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:12:45.358  [2024-11-18 05:58:06.110460] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:45.358  [2024-11-18 05:58:06.110643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84887 ]
00:12:45.358  [2024-11-18 05:58:06.265829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:45.358  [2024-11-18 05:58:06.285623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.358  
[2024-11-18T05:58:06.594Z] Copying: 512/512 [B] (average 166 kBps)
00:12:45.616  
00:12:45.616   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8mbq0qgpal8l5u7tmoqfn3ojubxphs8yf3l3ygnqpjr15qkeghh2ecq1g8tf2y4rzcuzbpx85wy1r3kv121icqmlquy803eatm31qhu5vax0gz5fr23hp9j06wf670ymgp206kvet0s5vqymbtg8wg1vnt7ykxl53oiudfpgq4bkfp650jb3c63n6tw03o5lo101i8r4rybx02jafn8javmt6jxpqvknyjbq7yb3vr359hieid4ok7xc98sk1daqooku51pm26xgv0z4w20o10bv2nv1j5lgmyqqecpngd08u9r1enl6atpv0u812cisifzzjsxejcwpn0u6nco5qfpdsu22hu82bjmeao59lm35qdnkw7v4moh6mzprllnxb3euqxz0g70y2rnaikg5d335cv4o76pnvrpoixoa1p13g5g71n1dkzijinyed1sfu8nju548843ocblhd87d6gkzg8o66ljkg6oc6yg4krvjzs2c3rpbln0s4w04l34n == \8\m\b\q\0\q\g\p\a\l\8\l\5\u\7\t\m\o\q\f\n\3\o\j\u\b\x\p\h\s\8\y\f\3\l\3\y\g\n\q\p\j\r\1\5\q\k\e\g\h\h\2\e\c\q\1\g\8\t\f\2\y\4\r\z\c\u\z\b\p\x\8\5\w\y\1\r\3\k\v\1\2\1\i\c\q\m\l\q\u\y\8\0\3\e\a\t\m\3\1\q\h\u\5\v\a\x\0\g\z\5\f\r\2\3\h\p\9\j\0\6\w\f\6\7\0\y\m\g\p\2\0\6\k\v\e\t\0\s\5\v\q\y\m\b\t\g\8\w\g\1\v\n\t\7\y\k\x\l\5\3\o\i\u\d\f\p\g\q\4\b\k\f\p\6\5\0\j\b\3\c\6\3\n\6\t\w\0\3\o\5\l\o\1\0\1\i\8\r\4\r\y\b\x\0\2\j\a\f\n\8\j\a\v\m\t\6\j\x\p\q\v\k\n\y\j\b\q\7\y\b\3\v\r\3\5\9\h\i\e\i\d\4\o\k\7\x\c\9\8\s\k\1\d\a\q\o\o\k\u\5\1\p\m\2\6\x\g\v\0\z\4\w\2\0\o\1\0\b\v\2\n\v\1\j\5\l\g\m\y\q\q\e\c\p\n\g\d\0\8\u\9\r\1\e\n\l\6\a\t\p\v\0\u\8\1\2\c\i\s\i\f\z\z\j\s\x\e\j\c\w\p\n\0\u\6\n\c\o\5\q\f\p\d\s\u\2\2\h\u\8\2\b\j\m\e\a\o\5\9\l\m\3\5\q\d\n\k\w\7\v\4\m\o\h\6\m\z\p\r\l\l\n\x\b\3\e\u\q\x\z\0\g\7\0\y\2\r\n\a\i\k\g\5\d\3\3\5\c\v\4\o\7\6\p\n\v\r\p\o\i\x\o\a\1\p\1\3\g\5\g\7\1\n\1\d\k\z\i\j\i\n\y\e\d\1\s\f\u\8\n\j\u\5\4\8\8\4\3\o\c\b\l\h\d\8\7\d\6\g\k\z\g\8\o\6\6\l\j\k\g\6\o\c\6\y\g\4\k\r\v\j\z\s\2\c\3\r\p\b\l\n\0\s\4\w\0\4\l\3\4\n ]]
00:12:45.616   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:45.616   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:12:45.616  [2024-11-18 05:58:06.553508] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:45.616  [2024-11-18 05:58:06.553702] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84901 ]
00:12:45.874  [2024-11-18 05:58:06.705423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:45.874  [2024-11-18 05:58:06.725278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:45.874  
[2024-11-18T05:58:07.110Z] Copying: 512/512 [B] (average 166 kBps)
00:12:46.132  
00:12:46.132   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 8mbq0qgpal8l5u7tmoqfn3ojubxphs8yf3l3ygnqpjr15qkeghh2ecq1g8tf2y4rzcuzbpx85wy1r3kv121icqmlquy803eatm31qhu5vax0gz5fr23hp9j06wf670ymgp206kvet0s5vqymbtg8wg1vnt7ykxl53oiudfpgq4bkfp650jb3c63n6tw03o5lo101i8r4rybx02jafn8javmt6jxpqvknyjbq7yb3vr359hieid4ok7xc98sk1daqooku51pm26xgv0z4w20o10bv2nv1j5lgmyqqecpngd08u9r1enl6atpv0u812cisifzzjsxejcwpn0u6nco5qfpdsu22hu82bjmeao59lm35qdnkw7v4moh6mzprllnxb3euqxz0g70y2rnaikg5d335cv4o76pnvrpoixoa1p13g5g71n1dkzijinyed1sfu8nju548843ocblhd87d6gkzg8o66ljkg6oc6yg4krvjzs2c3rpbln0s4w04l34n == \8\m\b\q\0\q\g\p\a\l\8\l\5\u\7\t\m\o\q\f\n\3\o\j\u\b\x\p\h\s\8\y\f\3\l\3\y\g\n\q\p\j\r\1\5\q\k\e\g\h\h\2\e\c\q\1\g\8\t\f\2\y\4\r\z\c\u\z\b\p\x\8\5\w\y\1\r\3\k\v\1\2\1\i\c\q\m\l\q\u\y\8\0\3\e\a\t\m\3\1\q\h\u\5\v\a\x\0\g\z\5\f\r\2\3\h\p\9\j\0\6\w\f\6\7\0\y\m\g\p\2\0\6\k\v\e\t\0\s\5\v\q\y\m\b\t\g\8\w\g\1\v\n\t\7\y\k\x\l\5\3\o\i\u\d\f\p\g\q\4\b\k\f\p\6\5\0\j\b\3\c\6\3\n\6\t\w\0\3\o\5\l\o\1\0\1\i\8\r\4\r\y\b\x\0\2\j\a\f\n\8\j\a\v\m\t\6\j\x\p\q\v\k\n\y\j\b\q\7\y\b\3\v\r\3\5\9\h\i\e\i\d\4\o\k\7\x\c\9\8\s\k\1\d\a\q\o\o\k\u\5\1\p\m\2\6\x\g\v\0\z\4\w\2\0\o\1\0\b\v\2\n\v\1\j\5\l\g\m\y\q\q\e\c\p\n\g\d\0\8\u\9\r\1\e\n\l\6\a\t\p\v\0\u\8\1\2\c\i\s\i\f\z\z\j\s\x\e\j\c\w\p\n\0\u\6\n\c\o\5\q\f\p\d\s\u\2\2\h\u\8\2\b\j\m\e\a\o\5\9\l\m\3\5\q\d\n\k\w\7\v\4\m\o\h\6\m\z\p\r\l\l\n\x\b\3\e\u\q\x\z\0\g\7\0\y\2\r\n\a\i\k\g\5\d\3\3\5\c\v\4\o\7\6\p\n\v\r\p\o\i\x\o\a\1\p\1\3\g\5\g\7\1\n\1\d\k\z\i\j\i\n\y\e\d\1\s\f\u\8\n\j\u\5\4\8\8\4\3\o\c\b\l\h\d\8\7\d\6\g\k\z\g\8\o\6\6\l\j\k\g\6\o\c\6\y\g\4\k\r\v\j\z\s\2\c\3\r\p\b\l\n\0\s\4\w\0\4\l\3\4\n ]]
00:12:46.132   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:12:46.132   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512
00:12:46.132   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:12:46.132   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:46.132   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:46.132   05:58:06 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:12:46.132  [2024-11-18 05:58:06.995363] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:46.132  [2024-11-18 05:58:06.995558] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84904 ]
00:12:46.391  [2024-11-18 05:58:07.148328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:46.391  [2024-11-18 05:58:07.169114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:46.391  
[2024-11-18T05:58:07.627Z] Copying: 512/512 [B] (average 500 kBps)
00:12:46.649  
00:12:46.649   05:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ eiaw33ygs863kasjbmy3wxvg78saqayjzxpejjn9trmif96dmui1v8e8s7nc5qlx69wq6csidc53x05x85nobrdctd844jboaoe65uyi8c4667jaxiqv23dcsbl90szgm3zlbvwrhdk0qf7twj6hbxtdao3vn5h9yoc57oyg5krzjg42mv6c1jev78rcjkaj1f79s8se6svjm8b7wkbtt4e1mxpp4csfdze29sr61obqzsc8l36bidpckv1lf4eyd8wxm5gkdhsbe1lxh7qoyztxzjbjvhf4fktgr496kyhgtr6yh7quwtxcj5ut05ek2hac6x5qxs7nvpg3k3ly76udlfh67no4ugxwnne4qgxj420v7vlapx2xw1l4n2hbfmnjvi2rtt506uq9qj206qxv62mymmwsbgd07da3u1ecykt8k1ksmydnu92kvc2zaqdtmrtla0phwgeiqq0f164alffzvhzi1ll8aypifhai2x9tgcjwt8290evophk2 == \e\i\a\w\3\3\y\g\s\8\6\3\k\a\s\j\b\m\y\3\w\x\v\g\7\8\s\a\q\a\y\j\z\x\p\e\j\j\n\9\t\r\m\i\f\9\6\d\m\u\i\1\v\8\e\8\s\7\n\c\5\q\l\x\6\9\w\q\6\c\s\i\d\c\5\3\x\0\5\x\8\5\n\o\b\r\d\c\t\d\8\4\4\j\b\o\a\o\e\6\5\u\y\i\8\c\4\6\6\7\j\a\x\i\q\v\2\3\d\c\s\b\l\9\0\s\z\g\m\3\z\l\b\v\w\r\h\d\k\0\q\f\7\t\w\j\6\h\b\x\t\d\a\o\3\v\n\5\h\9\y\o\c\5\7\o\y\g\5\k\r\z\j\g\4\2\m\v\6\c\1\j\e\v\7\8\r\c\j\k\a\j\1\f\7\9\s\8\s\e\6\s\v\j\m\8\b\7\w\k\b\t\t\4\e\1\m\x\p\p\4\c\s\f\d\z\e\2\9\s\r\6\1\o\b\q\z\s\c\8\l\3\6\b\i\d\p\c\k\v\1\l\f\4\e\y\d\8\w\x\m\5\g\k\d\h\s\b\e\1\l\x\h\7\q\o\y\z\t\x\z\j\b\j\v\h\f\4\f\k\t\g\r\4\9\6\k\y\h\g\t\r\6\y\h\7\q\u\w\t\x\c\j\5\u\t\0\5\e\k\2\h\a\c\6\x\5\q\x\s\7\n\v\p\g\3\k\3\l\y\7\6\u\d\l\f\h\6\7\n\o\4\u\g\x\w\n\n\e\4\q\g\x\j\4\2\0\v\7\v\l\a\p\x\2\x\w\1\l\4\n\2\h\b\f\m\n\j\v\i\2\r\t\t\5\0\6\u\q\9\q\j\2\0\6\q\x\v\6\2\m\y\m\m\w\s\b\g\d\0\7\d\a\3\u\1\e\c\y\k\t\8\k\1\k\s\m\y\d\n\u\9\2\k\v\c\2\z\a\q\d\t\m\r\t\l\a\0\p\h\w\g\e\i\q\q\0\f\1\6\4\a\l\f\f\z\v\h\z\i\1\l\l\8\a\y\p\i\f\h\a\i\2\x\9\t\g\c\j\w\t\8\2\9\0\e\v\o\p\h\k\2 ]]
00:12:46.649   05:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:46.649   05:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:12:46.649  [2024-11-18 05:58:07.440964] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:46.649  [2024-11-18 05:58:07.441176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84912 ]
00:12:46.649  [2024-11-18 05:58:07.593422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:46.649  [2024-11-18 05:58:07.613307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:46.908  
[2024-11-18T05:58:07.886Z] Copying: 512/512 [B] (average 500 kBps)
00:12:46.908  
00:12:46.908   05:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ eiaw33ygs863kasjbmy3wxvg78saqayjzxpejjn9trmif96dmui1v8e8s7nc5qlx69wq6csidc53x05x85nobrdctd844jboaoe65uyi8c4667jaxiqv23dcsbl90szgm3zlbvwrhdk0qf7twj6hbxtdao3vn5h9yoc57oyg5krzjg42mv6c1jev78rcjkaj1f79s8se6svjm8b7wkbtt4e1mxpp4csfdze29sr61obqzsc8l36bidpckv1lf4eyd8wxm5gkdhsbe1lxh7qoyztxzjbjvhf4fktgr496kyhgtr6yh7quwtxcj5ut05ek2hac6x5qxs7nvpg3k3ly76udlfh67no4ugxwnne4qgxj420v7vlapx2xw1l4n2hbfmnjvi2rtt506uq9qj206qxv62mymmwsbgd07da3u1ecykt8k1ksmydnu92kvc2zaqdtmrtla0phwgeiqq0f164alffzvhzi1ll8aypifhai2x9tgcjwt8290evophk2 == \e\i\a\w\3\3\y\g\s\8\6\3\k\a\s\j\b\m\y\3\w\x\v\g\7\8\s\a\q\a\y\j\z\x\p\e\j\j\n\9\t\r\m\i\f\9\6\d\m\u\i\1\v\8\e\8\s\7\n\c\5\q\l\x\6\9\w\q\6\c\s\i\d\c\5\3\x\0\5\x\8\5\n\o\b\r\d\c\t\d\8\4\4\j\b\o\a\o\e\6\5\u\y\i\8\c\4\6\6\7\j\a\x\i\q\v\2\3\d\c\s\b\l\9\0\s\z\g\m\3\z\l\b\v\w\r\h\d\k\0\q\f\7\t\w\j\6\h\b\x\t\d\a\o\3\v\n\5\h\9\y\o\c\5\7\o\y\g\5\k\r\z\j\g\4\2\m\v\6\c\1\j\e\v\7\8\r\c\j\k\a\j\1\f\7\9\s\8\s\e\6\s\v\j\m\8\b\7\w\k\b\t\t\4\e\1\m\x\p\p\4\c\s\f\d\z\e\2\9\s\r\6\1\o\b\q\z\s\c\8\l\3\6\b\i\d\p\c\k\v\1\l\f\4\e\y\d\8\w\x\m\5\g\k\d\h\s\b\e\1\l\x\h\7\q\o\y\z\t\x\z\j\b\j\v\h\f\4\f\k\t\g\r\4\9\6\k\y\h\g\t\r\6\y\h\7\q\u\w\t\x\c\j\5\u\t\0\5\e\k\2\h\a\c\6\x\5\q\x\s\7\n\v\p\g\3\k\3\l\y\7\6\u\d\l\f\h\6\7\n\o\4\u\g\x\w\n\n\e\4\q\g\x\j\4\2\0\v\7\v\l\a\p\x\2\x\w\1\l\4\n\2\h\b\f\m\n\j\v\i\2\r\t\t\5\0\6\u\q\9\q\j\2\0\6\q\x\v\6\2\m\y\m\m\w\s\b\g\d\0\7\d\a\3\u\1\e\c\y\k\t\8\k\1\k\s\m\y\d\n\u\9\2\k\v\c\2\z\a\q\d\t\m\r\t\l\a\0\p\h\w\g\e\i\q\q\0\f\1\6\4\a\l\f\f\z\v\h\z\i\1\l\l\8\a\y\p\i\f\h\a\i\2\x\9\t\g\c\j\w\t\8\2\9\0\e\v\o\p\h\k\2 ]]
00:12:46.908   05:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:46.908   05:58:07 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:12:47.167  [2024-11-18 05:58:07.900039] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:47.167  [2024-11-18 05:58:07.900239] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84921 ]
00:12:47.167  [2024-11-18 05:58:08.053936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:47.167  [2024-11-18 05:58:08.075588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:47.167  
[2024-11-18T05:58:08.404Z] Copying: 512/512 [B] (average 166 kBps)
00:12:47.426  
00:12:47.426   05:58:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ eiaw33ygs863kasjbmy3wxvg78saqayjzxpejjn9trmif96dmui1v8e8s7nc5qlx69wq6csidc53x05x85nobrdctd844jboaoe65uyi8c4667jaxiqv23dcsbl90szgm3zlbvwrhdk0qf7twj6hbxtdao3vn5h9yoc57oyg5krzjg42mv6c1jev78rcjkaj1f79s8se6svjm8b7wkbtt4e1mxpp4csfdze29sr61obqzsc8l36bidpckv1lf4eyd8wxm5gkdhsbe1lxh7qoyztxzjbjvhf4fktgr496kyhgtr6yh7quwtxcj5ut05ek2hac6x5qxs7nvpg3k3ly76udlfh67no4ugxwnne4qgxj420v7vlapx2xw1l4n2hbfmnjvi2rtt506uq9qj206qxv62mymmwsbgd07da3u1ecykt8k1ksmydnu92kvc2zaqdtmrtla0phwgeiqq0f164alffzvhzi1ll8aypifhai2x9tgcjwt8290evophk2 == \e\i\a\w\3\3\y\g\s\8\6\3\k\a\s\j\b\m\y\3\w\x\v\g\7\8\s\a\q\a\y\j\z\x\p\e\j\j\n\9\t\r\m\i\f\9\6\d\m\u\i\1\v\8\e\8\s\7\n\c\5\q\l\x\6\9\w\q\6\c\s\i\d\c\5\3\x\0\5\x\8\5\n\o\b\r\d\c\t\d\8\4\4\j\b\o\a\o\e\6\5\u\y\i\8\c\4\6\6\7\j\a\x\i\q\v\2\3\d\c\s\b\l\9\0\s\z\g\m\3\z\l\b\v\w\r\h\d\k\0\q\f\7\t\w\j\6\h\b\x\t\d\a\o\3\v\n\5\h\9\y\o\c\5\7\o\y\g\5\k\r\z\j\g\4\2\m\v\6\c\1\j\e\v\7\8\r\c\j\k\a\j\1\f\7\9\s\8\s\e\6\s\v\j\m\8\b\7\w\k\b\t\t\4\e\1\m\x\p\p\4\c\s\f\d\z\e\2\9\s\r\6\1\o\b\q\z\s\c\8\l\3\6\b\i\d\p\c\k\v\1\l\f\4\e\y\d\8\w\x\m\5\g\k\d\h\s\b\e\1\l\x\h\7\q\o\y\z\t\x\z\j\b\j\v\h\f\4\f\k\t\g\r\4\9\6\k\y\h\g\t\r\6\y\h\7\q\u\w\t\x\c\j\5\u\t\0\5\e\k\2\h\a\c\6\x\5\q\x\s\7\n\v\p\g\3\k\3\l\y\7\6\u\d\l\f\h\6\7\n\o\4\u\g\x\w\n\n\e\4\q\g\x\j\4\2\0\v\7\v\l\a\p\x\2\x\w\1\l\4\n\2\h\b\f\m\n\j\v\i\2\r\t\t\5\0\6\u\q\9\q\j\2\0\6\q\x\v\6\2\m\y\m\m\w\s\b\g\d\0\7\d\a\3\u\1\e\c\y\k\t\8\k\1\k\s\m\y\d\n\u\9\2\k\v\c\2\z\a\q\d\t\m\r\t\l\a\0\p\h\w\g\e\i\q\q\0\f\1\6\4\a\l\f\f\z\v\h\z\i\1\l\l\8\a\y\p\i\f\h\a\i\2\x\9\t\g\c\j\w\t\8\2\9\0\e\v\o\p\h\k\2 ]]
00:12:47.426   05:58:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:12:47.426   05:58:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:12:47.426  [2024-11-18 05:58:08.370142] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:47.426  [2024-11-18 05:58:08.370341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84924 ]
00:12:47.684  [2024-11-18 05:58:08.527917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:47.684  [2024-11-18 05:58:08.548265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:47.684  
[2024-11-18T05:58:08.921Z] Copying: 512/512 [B] (average 100 kBps)
00:12:47.943  
00:12:47.943   05:58:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ eiaw33ygs863kasjbmy3wxvg78saqayjzxpejjn9trmif96dmui1v8e8s7nc5qlx69wq6csidc53x05x85nobrdctd844jboaoe65uyi8c4667jaxiqv23dcsbl90szgm3zlbvwrhdk0qf7twj6hbxtdao3vn5h9yoc57oyg5krzjg42mv6c1jev78rcjkaj1f79s8se6svjm8b7wkbtt4e1mxpp4csfdze29sr61obqzsc8l36bidpckv1lf4eyd8wxm5gkdhsbe1lxh7qoyztxzjbjvhf4fktgr496kyhgtr6yh7quwtxcj5ut05ek2hac6x5qxs7nvpg3k3ly76udlfh67no4ugxwnne4qgxj420v7vlapx2xw1l4n2hbfmnjvi2rtt506uq9qj206qxv62mymmwsbgd07da3u1ecykt8k1ksmydnu92kvc2zaqdtmrtla0phwgeiqq0f164alffzvhzi1ll8aypifhai2x9tgcjwt8290evophk2 == \e\i\a\w\3\3\y\g\s\8\6\3\k\a\s\j\b\m\y\3\w\x\v\g\7\8\s\a\q\a\y\j\z\x\p\e\j\j\n\9\t\r\m\i\f\9\6\d\m\u\i\1\v\8\e\8\s\7\n\c\5\q\l\x\6\9\w\q\6\c\s\i\d\c\5\3\x\0\5\x\8\5\n\o\b\r\d\c\t\d\8\4\4\j\b\o\a\o\e\6\5\u\y\i\8\c\4\6\6\7\j\a\x\i\q\v\2\3\d\c\s\b\l\9\0\s\z\g\m\3\z\l\b\v\w\r\h\d\k\0\q\f\7\t\w\j\6\h\b\x\t\d\a\o\3\v\n\5\h\9\y\o\c\5\7\o\y\g\5\k\r\z\j\g\4\2\m\v\6\c\1\j\e\v\7\8\r\c\j\k\a\j\1\f\7\9\s\8\s\e\6\s\v\j\m\8\b\7\w\k\b\t\t\4\e\1\m\x\p\p\4\c\s\f\d\z\e\2\9\s\r\6\1\o\b\q\z\s\c\8\l\3\6\b\i\d\p\c\k\v\1\l\f\4\e\y\d\8\w\x\m\5\g\k\d\h\s\b\e\1\l\x\h\7\q\o\y\z\t\x\z\j\b\j\v\h\f\4\f\k\t\g\r\4\9\6\k\y\h\g\t\r\6\y\h\7\q\u\w\t\x\c\j\5\u\t\0\5\e\k\2\h\a\c\6\x\5\q\x\s\7\n\v\p\g\3\k\3\l\y\7\6\u\d\l\f\h\6\7\n\o\4\u\g\x\w\n\n\e\4\q\g\x\j\4\2\0\v\7\v\l\a\p\x\2\x\w\1\l\4\n\2\h\b\f\m\n\j\v\i\2\r\t\t\5\0\6\u\q\9\q\j\2\0\6\q\x\v\6\2\m\y\m\m\w\s\b\g\d\0\7\d\a\3\u\1\e\c\y\k\t\8\k\1\k\s\m\y\d\n\u\9\2\k\v\c\2\z\a\q\d\t\m\r\t\l\a\0\p\h\w\g\e\i\q\q\0\f\1\6\4\a\l\f\f\z\v\h\z\i\1\l\l\8\a\y\p\i\f\h\a\i\2\x\9\t\g\c\j\w\t\8\2\9\0\e\v\o\p\h\k\2 ]]
00:12:47.943  
00:12:47.943  real	0m3.636s
00:12:47.943  user	0m1.644s
00:12:47.943  sys	0m1.042s
00:12:47.943  ************************************
00:12:47.943  END TEST dd_flags_misc_forced_aio
00:12:47.943  ************************************
00:12:47.944   05:58:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:47.944   05:58:08 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:12:47.944   05:58:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup
00:12:47.944   05:58:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:12:47.944   05:58:08 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:12:47.944  
00:12:47.944  real	0m17.401s
00:12:47.944  user	0m6.933s
00:12:47.944  sys	0m4.776s
00:12:47.944   05:58:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:47.944   05:58:08 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:12:47.944  ************************************
00:12:47.944  END TEST spdk_dd_posix
00:12:47.944  ************************************
00:12:47.944   05:58:08 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:12:47.944   05:58:08 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:47.944   05:58:08 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:47.944   05:58:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:12:47.944  ************************************
00:12:47.944  START TEST spdk_dd_malloc
00:12:47.944  ************************************
00:12:47.944   05:58:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:12:48.203  * Looking for test storage...
00:12:48.203  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:12:48.203     05:58:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:48.203      05:58:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version
00:12:48.203      05:58:08 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-:
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-:
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<'
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:48.203      05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:48.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:48.203  		--rc genhtml_branch_coverage=1
00:12:48.203  		--rc genhtml_function_coverage=1
00:12:48.203  		--rc genhtml_legend=1
00:12:48.203  		--rc geninfo_all_blocks=1
00:12:48.203  		--rc geninfo_unexecuted_blocks=1
00:12:48.203  		
00:12:48.203  		'
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:48.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:48.203  		--rc genhtml_branch_coverage=1
00:12:48.203  		--rc genhtml_function_coverage=1
00:12:48.203  		--rc genhtml_legend=1
00:12:48.203  		--rc geninfo_all_blocks=1
00:12:48.203  		--rc geninfo_unexecuted_blocks=1
00:12:48.203  		
00:12:48.203  		'
00:12:48.203     05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:48.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:48.203  		--rc genhtml_branch_coverage=1
00:12:48.204  		--rc genhtml_function_coverage=1
00:12:48.204  		--rc genhtml_legend=1
00:12:48.204  		--rc geninfo_all_blocks=1
00:12:48.204  		--rc geninfo_unexecuted_blocks=1
00:12:48.204  		
00:12:48.204  		'
00:12:48.204     05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:48.204  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:48.204  		--rc genhtml_branch_coverage=1
00:12:48.204  		--rc genhtml_function_coverage=1
00:12:48.204  		--rc genhtml_legend=1
00:12:48.204  		--rc geninfo_all_blocks=1
00:12:48.204  		--rc geninfo_unexecuted_blocks=1
00:12:48.204  		
00:12:48.204  		'
00:12:48.204    05:58:09 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:48.204     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob
00:12:48.204     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:48.204     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:48.204     05:58:09 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:48.204      05:58:09 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:48.204      05:58:09 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:48.204      05:58:09 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:48.204      05:58:09 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:48.204      05:58:09 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # export PATH
00:12:48.204      05:58:09 spdk_dd.spdk_dd_malloc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x
00:12:48.204  ************************************
00:12:48.204  START TEST dd_malloc_copy
00:12:48.204  ************************************
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512')
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512')
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1
00:12:48.204   05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62
00:12:48.204    05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf
00:12:48.204    05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable
00:12:48.204    05:58:09 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:12:48.204  {
00:12:48.204    "subsystems": [
00:12:48.204      {
00:12:48.204        "subsystem": "bdev",
00:12:48.204        "config": [
00:12:48.204          {
00:12:48.204            "params": {
00:12:48.204              "block_size": 512,
00:12:48.204              "num_blocks": 1048576,
00:12:48.204              "name": "malloc0"
00:12:48.204            },
00:12:48.204            "method": "bdev_malloc_create"
00:12:48.204          },
00:12:48.204          {
00:12:48.204            "params": {
00:12:48.204              "block_size": 512,
00:12:48.204              "num_blocks": 1048576,
00:12:48.204              "name": "malloc1"
00:12:48.204            },
00:12:48.204            "method": "bdev_malloc_create"
00:12:48.204          },
00:12:48.204          {
00:12:48.204            "method": "bdev_wait_for_examine"
00:12:48.204          }
00:12:48.204        ]
00:12:48.204      }
00:12:48.204    ]
00:12:48.204  }
00:12:48.204  [2024-11-18 05:58:09.117563] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:48.204  [2024-11-18 05:58:09.117970] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85007 ]
00:12:48.465  [2024-11-18 05:58:09.272295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:48.465  [2024-11-18 05:58:09.292322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:49.841  
[2024-11-18T05:58:11.754Z] Copying: 174/512 [MB] (174 MBps)
[2024-11-18T05:58:12.690Z] Copying: 352/512 [MB] (177 MBps)
[2024-11-18T05:58:12.949Z] Copying: 512/512 [MB] (average 175 MBps)
00:12:51.971  
00:12:51.971   05:58:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62
00:12:51.971    05:58:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf
00:12:51.971    05:58:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable
00:12:51.971    05:58:12 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:12:51.971  {
00:12:51.971    "subsystems": [
00:12:51.971      {
00:12:51.971        "subsystem": "bdev",
00:12:51.971        "config": [
00:12:51.971          {
00:12:51.971            "params": {
00:12:51.971              "block_size": 512,
00:12:51.971              "num_blocks": 1048576,
00:12:51.971              "name": "malloc0"
00:12:51.971            },
00:12:51.971            "method": "bdev_malloc_create"
00:12:51.971          },
00:12:51.971          {
00:12:51.971            "params": {
00:12:51.971              "block_size": 512,
00:12:51.971              "num_blocks": 1048576,
00:12:51.971              "name": "malloc1"
00:12:51.971            },
00:12:51.971            "method": "bdev_malloc_create"
00:12:51.971          },
00:12:51.971          {
00:12:51.971            "method": "bdev_wait_for_examine"
00:12:51.971          }
00:12:51.971        ]
00:12:51.971      }
00:12:51.971    ]
00:12:51.971  }
00:12:51.971  [2024-11-18 05:58:12.859195] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:51.971  [2024-11-18 05:58:12.859546] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85060 ]
00:12:52.231  [2024-11-18 05:58:13.004950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:52.231  [2024-11-18 05:58:13.025412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:53.609  
[2024-11-18T05:58:15.524Z] Copying: 173/512 [MB] (173 MBps)
[2024-11-18T05:58:16.460Z] Copying: 345/512 [MB] (172 MBps)
[2024-11-18T05:58:16.719Z] Copying: 512/512 [MB] (average 173 MBps)
00:12:55.741  
00:12:55.741  
00:12:55.741  real	0m7.529s
00:12:55.741  user	0m6.689s
00:12:55.741  sys	0m0.637s
00:12:55.741  ************************************
00:12:55.741  END TEST dd_malloc_copy
00:12:55.741  ************************************
00:12:55.741   05:58:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:55.741   05:58:16 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:12:55.741  ************************************
00:12:55.741  END TEST spdk_dd_malloc
00:12:55.741  ************************************
00:12:55.741  
00:12:55.741  real	0m7.762s
00:12:55.741  user	0m6.812s
00:12:55.741  sys	0m0.750s
00:12:55.741   05:58:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:55.741   05:58:16 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x
00:12:55.741   05:58:16 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0
00:12:55.741   05:58:16 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:55.741   05:58:16 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:55.741   05:58:16 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:12:55.741  ************************************
00:12:55.741  START TEST spdk_dd_bdev_to_bdev
00:12:55.741  ************************************
00:12:55.741   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0
00:12:56.000  * Looking for test storage...
00:12:56.000  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-:
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-:
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<'
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:56.000      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:56.000     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:56.000  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.000  		--rc genhtml_branch_coverage=1
00:12:56.001  		--rc genhtml_function_coverage=1
00:12:56.001  		--rc genhtml_legend=1
00:12:56.001  		--rc geninfo_all_blocks=1
00:12:56.001  		--rc geninfo_unexecuted_blocks=1
00:12:56.001  		
00:12:56.001  		'
00:12:56.001     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:56.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.001  		--rc genhtml_branch_coverage=1
00:12:56.001  		--rc genhtml_function_coverage=1
00:12:56.001  		--rc genhtml_legend=1
00:12:56.001  		--rc geninfo_all_blocks=1
00:12:56.001  		--rc geninfo_unexecuted_blocks=1
00:12:56.001  		
00:12:56.001  		'
00:12:56.001     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:56.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.001  		--rc genhtml_branch_coverage=1
00:12:56.001  		--rc genhtml_function_coverage=1
00:12:56.001  		--rc genhtml_legend=1
00:12:56.001  		--rc geninfo_all_blocks=1
00:12:56.001  		--rc geninfo_unexecuted_blocks=1
00:12:56.001  		
00:12:56.001  		'
00:12:56.001     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:56.001  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:56.001  		--rc genhtml_branch_coverage=1
00:12:56.001  		--rc genhtml_function_coverage=1
00:12:56.001  		--rc genhtml_legend=1
00:12:56.001  		--rc geninfo_all_blocks=1
00:12:56.001  		--rc geninfo_unexecuted_blocks=1
00:12:56.001  		
00:12:56.001  		'
00:12:56.001    05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:56.001     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob
00:12:56.001     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:56.001     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:56.001     05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:56.001      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:56.001      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:56.001      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:56.001      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:56.001      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # export PATH
00:12:56.001      05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@")
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 ))
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie')
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096')
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0
00:12:56.001   05:58:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256
00:12:56.001  [2024-11-18 05:58:16.917450] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:56.001  [2024-11-18 05:58:16.917706] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85173 ]
00:12:56.260  [2024-11-18 05:58:17.078908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:56.261  [2024-11-18 05:58:17.101086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:56.520  
[2024-11-18T05:58:17.498Z] Copying: 256/256 [MB] (average 1610 MBps)
00:12:56.520  
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it'
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it'
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:56.520   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:12:56.779  ************************************
00:12:56.779  START TEST dd_inflate_file
00:12:56.779  ************************************
00:12:56.779   05:58:17 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:12:56.779  [2024-11-18 05:58:17.556595] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:56.779  [2024-11-18 05:58:17.557015] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85183 ]
00:12:56.779  [2024-11-18 05:58:17.711226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:56.779  [2024-11-18 05:58:17.735677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:57.037  
[2024-11-18T05:58:18.015Z] Copying: 64/64 [MB] (average 1488 MBps)
00:12:57.037  
00:12:57.037  ************************************
00:12:57.037  END TEST dd_inflate_file
00:12:57.037  ************************************
00:12:57.037  
00:12:57.037  real	0m0.501s
00:12:57.037  user	0m0.207s
00:12:57.037  sys	0m0.181s
00:12:57.037   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:57.037   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x
00:12:57.296    05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c
00:12:57.296   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891
00:12:57.296   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:12:57.296    05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf
00:12:57.296   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:12:57.296   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:57.296   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:12:57.296    05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:12:57.296    05:58:18 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:12:57.296  ************************************
00:12:57.296  START TEST dd_copy_to_out_bdev
00:12:57.296  ************************************
00:12:57.296   05:58:18 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:12:57.296  {
00:12:57.296    "subsystems": [
00:12:57.296      {
00:12:57.296        "subsystem": "bdev",
00:12:57.296        "config": [
00:12:57.296          {
00:12:57.296            "params": {
00:12:57.296              "block_size": 4096,
00:12:57.296              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:12:57.296              "name": "aio1"
00:12:57.296            },
00:12:57.296            "method": "bdev_aio_create"
00:12:57.296          },
00:12:57.296          {
00:12:57.296            "params": {
00:12:57.296              "trtype": "pcie",
00:12:57.296              "traddr": "0000:00:10.0",
00:12:57.296              "name": "Nvme0"
00:12:57.296            },
00:12:57.296            "method": "bdev_nvme_attach_controller"
00:12:57.296          },
00:12:57.296          {
00:12:57.296            "method": "bdev_wait_for_examine"
00:12:57.296          }
00:12:57.296        ]
00:12:57.296      }
00:12:57.296    ]
00:12:57.296  }
00:12:57.296  [2024-11-18 05:58:18.121681] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:57.296  [2024-11-18 05:58:18.121915] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85216 ]
00:12:57.567  [2024-11-18 05:58:18.279638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:57.567  [2024-11-18 05:58:18.300992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:58.519  
[2024-11-18T05:58:20.065Z] Copying: 44/64 [MB] (44 MBps)
[2024-11-18T05:58:20.324Z] Copying: 64/64 [MB] (average 44 MBps)
00:12:59.346  
00:12:59.346  ************************************
00:12:59.346  END TEST dd_copy_to_out_bdev
00:12:59.346  ************************************
00:12:59.346  
00:12:59.346  real	0m2.043s
00:12:59.346  user	0m1.706s
00:12:59.346  sys	0m0.220s
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:12:59.346  ************************************
00:12:59.346  START TEST dd_offset_magic
00:12:59.346  ************************************
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64)
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:12:59.346   05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62
00:12:59.346    05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:12:59.346    05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:12:59.346    05:58:20 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:12:59.346  {
00:12:59.346    "subsystems": [
00:12:59.346      {
00:12:59.346        "subsystem": "bdev",
00:12:59.346        "config": [
00:12:59.346          {
00:12:59.346            "params": {
00:12:59.346              "block_size": 4096,
00:12:59.346              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:12:59.346              "name": "aio1"
00:12:59.346            },
00:12:59.346            "method": "bdev_aio_create"
00:12:59.346          },
00:12:59.346          {
00:12:59.346            "params": {
00:12:59.346              "trtype": "pcie",
00:12:59.346              "traddr": "0000:00:10.0",
00:12:59.346              "name": "Nvme0"
00:12:59.346            },
00:12:59.346            "method": "bdev_nvme_attach_controller"
00:12:59.346          },
00:12:59.346          {
00:12:59.346            "method": "bdev_wait_for_examine"
00:12:59.346          }
00:12:59.346        ]
00:12:59.346      }
00:12:59.346    ]
00:12:59.346  }
00:12:59.346  [2024-11-18 05:58:20.221248] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:12:59.346  [2024-11-18 05:58:20.221446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85261 ]
00:12:59.605  [2024-11-18 05:58:20.374337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:59.605  [2024-11-18 05:58:20.395582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.174  
[2024-11-18T05:58:21.410Z] Copying: 65/65 [MB] (average 144 MBps)
00:13:00.432  
00:13:00.433   05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62
00:13:00.433    05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:13:00.433    05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:13:00.433    05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:00.433  {
00:13:00.433    "subsystems": [
00:13:00.433      {
00:13:00.433        "subsystem": "bdev",
00:13:00.433        "config": [
00:13:00.433          {
00:13:00.433            "params": {
00:13:00.433              "block_size": 4096,
00:13:00.433              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:00.433              "name": "aio1"
00:13:00.433            },
00:13:00.433            "method": "bdev_aio_create"
00:13:00.433          },
00:13:00.433          {
00:13:00.433            "params": {
00:13:00.433              "trtype": "pcie",
00:13:00.433              "traddr": "0000:00:10.0",
00:13:00.433              "name": "Nvme0"
00:13:00.433            },
00:13:00.433            "method": "bdev_nvme_attach_controller"
00:13:00.433          },
00:13:00.433          {
00:13:00.433            "method": "bdev_wait_for_examine"
00:13:00.433          }
00:13:00.433        ]
00:13:00.433      }
00:13:00.433    ]
00:13:00.433  }
00:13:00.433  [2024-11-18 05:58:21.269684] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:00.433  [2024-11-18 05:58:21.269918] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85282 ]
00:13:00.691  [2024-11-18 05:58:21.429382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:00.691  [2024-11-18 05:58:21.455752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.691  
[2024-11-18T05:58:21.929Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:13:00.951  
00:13:00.951   05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:13:00.951   05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:13:00.951   05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:13:00.951   05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62
00:13:00.951    05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:13:00.951    05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:13:00.951    05:58:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:00.951  {
00:13:00.951    "subsystems": [
00:13:00.951      {
00:13:00.951        "subsystem": "bdev",
00:13:00.951        "config": [
00:13:00.951          {
00:13:00.951            "params": {
00:13:00.951              "block_size": 4096,
00:13:00.951              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:00.951              "name": "aio1"
00:13:00.951            },
00:13:00.951            "method": "bdev_aio_create"
00:13:00.951          },
00:13:00.951          {
00:13:00.951            "params": {
00:13:00.951              "trtype": "pcie",
00:13:00.951              "traddr": "0000:00:10.0",
00:13:00.951              "name": "Nvme0"
00:13:00.951            },
00:13:00.951            "method": "bdev_nvme_attach_controller"
00:13:00.951          },
00:13:00.951          {
00:13:00.951            "method": "bdev_wait_for_examine"
00:13:00.951          }
00:13:00.951        ]
00:13:00.951      }
00:13:00.951    ]
00:13:00.951  }
00:13:00.951  [2024-11-18 05:58:21.905259] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:00.951  [2024-11-18 05:58:21.905627] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85303 ]
00:13:01.210  [2024-11-18 05:58:22.067370] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:01.210  [2024-11-18 05:58:22.092383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:01.776  
[2024-11-18T05:58:23.011Z] Copying: 65/65 [MB] (average 187 MBps)
00:13:02.033  
00:13:02.033   05:58:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62
00:13:02.033    05:58:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:13:02.034    05:58:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:13:02.034    05:58:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:02.034  {
00:13:02.034    "subsystems": [
00:13:02.034      {
00:13:02.034        "subsystem": "bdev",
00:13:02.034        "config": [
00:13:02.034          {
00:13:02.034            "params": {
00:13:02.034              "block_size": 4096,
00:13:02.034              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:02.034              "name": "aio1"
00:13:02.034            },
00:13:02.034            "method": "bdev_aio_create"
00:13:02.034          },
00:13:02.034          {
00:13:02.034            "params": {
00:13:02.034              "trtype": "pcie",
00:13:02.034              "traddr": "0000:00:10.0",
00:13:02.034              "name": "Nvme0"
00:13:02.034            },
00:13:02.034            "method": "bdev_nvme_attach_controller"
00:13:02.034          },
00:13:02.034          {
00:13:02.034            "method": "bdev_wait_for_examine"
00:13:02.034          }
00:13:02.034        ]
00:13:02.034      }
00:13:02.034    ]
00:13:02.034  }
00:13:02.034  [2024-11-18 05:58:22.865374] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:02.034  [2024-11-18 05:58:22.865572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85313 ]
00:13:02.292  [2024-11-18 05:58:23.025399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:02.292  [2024-11-18 05:58:23.050587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:02.292  
[2024-11-18T05:58:23.528Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:13:02.550  
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:13:02.550  
00:13:02.550  real	0m3.282s
00:13:02.550  user	0m1.361s
00:13:02.550  sys	0m0.865s
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:02.550  ************************************
00:13:02.550  END TEST dd_offset_magic
00:13:02.550  ************************************
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref=
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5
00:13:02.550   05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62
00:13:02.550    05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf
00:13:02.550    05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:13:02.550    05:58:23 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:02.550  {
00:13:02.550    "subsystems": [
00:13:02.550      {
00:13:02.550        "subsystem": "bdev",
00:13:02.550        "config": [
00:13:02.550          {
00:13:02.550            "params": {
00:13:02.550              "block_size": 4096,
00:13:02.550              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:02.550              "name": "aio1"
00:13:02.550            },
00:13:02.550            "method": "bdev_aio_create"
00:13:02.550          },
00:13:02.550          {
00:13:02.550            "params": {
00:13:02.550              "trtype": "pcie",
00:13:02.550              "traddr": "0000:00:10.0",
00:13:02.550              "name": "Nvme0"
00:13:02.550            },
00:13:02.550            "method": "bdev_nvme_attach_controller"
00:13:02.550          },
00:13:02.550          {
00:13:02.550            "method": "bdev_wait_for_examine"
00:13:02.550          }
00:13:02.550        ]
00:13:02.550      }
00:13:02.550    ]
00:13:02.550  }
00:13:02.808  [2024-11-18 05:58:23.548690] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:02.808  [2024-11-18 05:58:23.549089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85349 ]
00:13:02.808  [2024-11-18 05:58:23.706216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:02.808  [2024-11-18 05:58:23.731848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:03.067  
[2024-11-18T05:58:24.303Z] Copying: 5120/5120 [kB] (average 1250 MBps)
00:13:03.325  
00:13:03.325   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330
00:13:03.326   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1
00:13:03.326   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref=
00:13:03.326   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330
00:13:03.326   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576
00:13:03.326   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5
00:13:03.326   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62
00:13:03.326    05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf
00:13:03.326    05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:13:03.326    05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:03.326  {
00:13:03.326    "subsystems": [
00:13:03.326      {
00:13:03.326        "subsystem": "bdev",
00:13:03.326        "config": [
00:13:03.326          {
00:13:03.326            "params": {
00:13:03.326              "block_size": 4096,
00:13:03.326              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:03.326              "name": "aio1"
00:13:03.326            },
00:13:03.326            "method": "bdev_aio_create"
00:13:03.326          },
00:13:03.326          {
00:13:03.326            "params": {
00:13:03.326              "trtype": "pcie",
00:13:03.326              "traddr": "0000:00:10.0",
00:13:03.326              "name": "Nvme0"
00:13:03.326            },
00:13:03.326            "method": "bdev_nvme_attach_controller"
00:13:03.326          },
00:13:03.326          {
00:13:03.326            "method": "bdev_wait_for_examine"
00:13:03.326          }
00:13:03.326        ]
00:13:03.326      }
00:13:03.326    ]
00:13:03.326  }
00:13:03.326  [2024-11-18 05:58:24.180251] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:03.326  [2024-11-18 05:58:24.180411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85370 ]
00:13:03.584  [2024-11-18 05:58:24.340403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:03.584  [2024-11-18 05:58:24.365559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:03.584  
[2024-11-18T05:58:24.821Z] Copying: 5120/5120 [kB] (average 238 MBps)
00:13:03.843  
00:13:03.843   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1
00:13:04.102  ************************************
00:13:04.102  END TEST spdk_dd_bdev_to_bdev
00:13:04.102  ************************************
00:13:04.102  
00:13:04.102  real	0m8.158s
00:13:04.102  user	0m4.344s
00:13:04.102  sys	0m2.163s
00:13:04.103   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:04.103   05:58:24 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:04.103   05:58:24 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 ))
00:13:04.103   05:58:24 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:13:04.103   05:58:24 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:04.103   05:58:24 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:04.103   05:58:24 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:04.103  ************************************
00:13:04.103  START TEST spdk_dd_sparse
00:13:04.103  ************************************
00:13:04.103   05:58:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:13:04.103  * Looking for test storage...
00:13:04.103  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:13:04.103     05:58:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:04.103      05:58:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version
00:13:04.103      05:58:24 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-:
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-:
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<'
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:04.103  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.103  		--rc genhtml_branch_coverage=1
00:13:04.103  		--rc genhtml_function_coverage=1
00:13:04.103  		--rc genhtml_legend=1
00:13:04.103  		--rc geninfo_all_blocks=1
00:13:04.103  		--rc geninfo_unexecuted_blocks=1
00:13:04.103  		
00:13:04.103  		'
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:04.103  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.103  		--rc genhtml_branch_coverage=1
00:13:04.103  		--rc genhtml_function_coverage=1
00:13:04.103  		--rc genhtml_legend=1
00:13:04.103  		--rc geninfo_all_blocks=1
00:13:04.103  		--rc geninfo_unexecuted_blocks=1
00:13:04.103  		
00:13:04.103  		'
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:04.103  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.103  		--rc genhtml_branch_coverage=1
00:13:04.103  		--rc genhtml_function_coverage=1
00:13:04.103  		--rc genhtml_legend=1
00:13:04.103  		--rc geninfo_all_blocks=1
00:13:04.103  		--rc geninfo_unexecuted_blocks=1
00:13:04.103  		
00:13:04.103  		'
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:04.103  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.103  		--rc genhtml_branch_coverage=1
00:13:04.103  		--rc genhtml_function_coverage=1
00:13:04.103  		--rc genhtml_legend=1
00:13:04.103  		--rc geninfo_all_blocks=1
00:13:04.103  		--rc geninfo_unexecuted_blocks=1
00:13:04.103  		
00:13:04.103  		'
00:13:04.103    05:58:25 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:04.103     05:58:25 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # export PATH
00:13:04.103      05:58:25 spdk_dd.spdk_dd_sparse -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600
00:13:04.103   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1
00:13:04.363  1+0 records in
00:13:04.363  1+0 records out
00:13:04.363  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00744148 s, 564 MB/s
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
00:13:04.363  1+0 records in
00:13:04.363  1+0 records out
00:13:04.363  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00683591 s, 614 MB/s
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
00:13:04.363  1+0 records in
00:13:04.363  1+0 records out
00:13:04.363  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00724703 s, 579 MB/s
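The prepare step above builds a 100 MiB aio backing file and a deliberately sparse 36 MiB input: three 4 MiB zero-writes at seek offsets 0, 4 and 8 (seek counts in 4 MiB units, so byte offsets 0, 16 MiB and 32 MiB), leaving holes in between. The same layout can be reproduced outside the harness (file name illustrative):

    # build a sparse file: 4 MiB of data at offsets 0, 16 MiB and 32 MiB, holes elsewhere
    dd if=/dev/zero of=sparse_demo bs=4M count=1
    dd if=/dev/zero of=sparse_demo bs=4M count=1 seek=4
    dd if=/dev/zero of=sparse_demo bs=4M count=1 seek=8
    stat -c 'logical=%s bytes, allocated=%b blocks' sparse_demo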
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:04.363  ************************************
00:13:04.363  START TEST dd_sparse_file_to_file
00:13:04.363  ************************************
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore')
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1
00:13:04.363   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
00:13:04.363    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf
00:13:04.363    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable
00:13:04.363    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:04.363  {
00:13:04.363    "subsystems": [
00:13:04.363      {
00:13:04.363        "subsystem": "bdev",
00:13:04.363        "config": [
00:13:04.363          {
00:13:04.363            "params": {
00:13:04.363              "block_size": 4096,
00:13:04.363              "filename": "dd_sparse_aio_disk",
00:13:04.363              "name": "dd_aio"
00:13:04.363            },
00:13:04.363            "method": "bdev_aio_create"
00:13:04.363          },
00:13:04.363          {
00:13:04.363            "params": {
00:13:04.363              "lvs_name": "dd_lvstore",
00:13:04.363              "bdev_name": "dd_aio"
00:13:04.363            },
00:13:04.363            "method": "bdev_lvol_create_lvstore"
00:13:04.363          },
00:13:04.363          {
00:13:04.363            "method": "bdev_wait_for_examine"
00:13:04.363          }
00:13:04.363        ]
00:13:04.363      }
00:13:04.363    ]
00:13:04.363  }
00:13:04.363  [2024-11-18 05:58:25.180087] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:04.363  [2024-11-18 05:58:25.180302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85446 ]
00:13:04.363  [2024-11-18 05:58:25.341096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:04.622  [2024-11-18 05:58:25.367403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:04.622  
[2024-11-18T05:58:25.859Z] Copying: 12/36 [MB] (average 1000 MBps)
00:13:04.881  
00:13:04.881    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736
00:13:04.881    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:13:04.881    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576
00:13:04.881    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]]
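The two checks above pin down both dimensions of sparseness: %s (logical size) is 37748736 bytes = 36 MiB for both files, while %b (allocated 512-byte blocks) is 24576, i.e. 24576 × 512 = 12582912 bytes = 12 MiB actually stored. So the --sparse copy carried the three 4 MiB data extents and skipped the 24 MiB of holes. The same comparison can be done by hand:

    # print logical size and allocated block count for both files (sketch)
    stat --printf='%s %b\n' file_zero1 file_zero2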
00:13:04.881  
00:13:04.881  real	0m0.614s
00:13:04.881  user	0m0.297s
00:13:04.881  sys	0m0.198s
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:04.881  ************************************
00:13:04.881  END TEST dd_sparse_file_to_file
00:13:04.881  ************************************
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:04.881  ************************************
00:13:04.881  START TEST dd_sparse_file_to_bdev
00:13:04.881  ************************************
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true')
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1
00:13:04.881   05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
00:13:04.881    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf
00:13:04.881    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:13:04.881    05:58:25 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:04.881  {
00:13:04.881    "subsystems": [
00:13:04.881      {
00:13:04.881        "subsystem": "bdev",
00:13:04.881        "config": [
00:13:04.881          {
00:13:04.881            "params": {
00:13:04.881              "block_size": 4096,
00:13:04.881              "filename": "dd_sparse_aio_disk",
00:13:04.881              "name": "dd_aio"
00:13:04.881            },
00:13:04.881            "method": "bdev_aio_create"
00:13:04.881          },
00:13:04.881          {
00:13:04.881            "params": {
00:13:04.881              "lvs_name": "dd_lvstore",
00:13:04.881              "lvol_name": "dd_lvol",
00:13:04.881              "size_in_mib": 36,
00:13:04.881              "thin_provision": true
00:13:04.881            },
00:13:04.881            "method": "bdev_lvol_create"
00:13:04.881          },
00:13:04.881          {
00:13:04.881            "method": "bdev_wait_for_examine"
00:13:04.881          }
00:13:04.881        ]
00:13:04.881      }
00:13:04.881    ]
00:13:04.881  }
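Note the "thin_provision": true in the lvol params: the 36 MiB dd_lvol starts with no clusters allocated, so the sparse 12 MiB copy only consumes space for clusters that are actually written. Outside gen_conf, the equivalent runtime call would presumably be the bdev_lvol_create RPC; a hedged sketch (flag spelling should be checked against the SPDK version in use):

    # create a 36 MiB thin-provisioned lvol on dd_lvstore (sketch, not taken from this run)
    scripts/rpc.py bdev_lvol_create -l dd_lvstore -t dd_lvol 36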
00:13:04.881  [2024-11-18 05:58:25.842257] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:04.881  [2024-11-18 05:58:25.842439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85482 ]
00:13:05.140  [2024-11-18 05:58:26.003788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:05.140  [2024-11-18 05:58:26.028930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:05.399  
[2024-11-18T05:58:26.377Z] Copying: 12/36 [MB] (average 545 MBps)
00:13:05.399  
00:13:05.399  
00:13:05.399  real	0m0.590s
00:13:05.399  user	0m0.295s
00:13:05.399  sys	0m0.186s
00:13:05.399   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:05.399   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:05.399  ************************************
00:13:05.399  END TEST dd_sparse_file_to_bdev
00:13:05.399  ************************************
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:05.658  ************************************
00:13:05.658  START TEST dd_sparse_bdev_to_file
00:13:05.658  ************************************
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0
00:13:05.658   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62
00:13:05.658    05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf
00:13:05.658    05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable
00:13:05.658    05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:05.658  {
00:13:05.658    "subsystems": [
00:13:05.658      {
00:13:05.658        "subsystem": "bdev",
00:13:05.658        "config": [
00:13:05.658          {
00:13:05.658            "params": {
00:13:05.658              "block_size": 4096,
00:13:05.658              "filename": "dd_sparse_aio_disk",
00:13:05.658              "name": "dd_aio"
00:13:05.658            },
00:13:05.658            "method": "bdev_aio_create"
00:13:05.658          },
00:13:05.658          {
00:13:05.658            "method": "bdev_wait_for_examine"
00:13:05.658          }
00:13:05.658        ]
00:13:05.658      }
00:13:05.658    ]
00:13:05.658  }
00:13:05.658  [2024-11-18 05:58:26.477873] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:05.658  [2024-11-18 05:58:26.478020] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85520 ]
00:13:05.658  [2024-11-18 05:58:26.628545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:05.917  [2024-11-18 05:58:26.654604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:05.917  
[2024-11-18T05:58:27.154Z] Copying: 12/36 [MB] (average 1000 MBps)
00:13:06.176  
00:13:06.176    05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2
00:13:06.176   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736
00:13:06.176    05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3
00:13:06.176   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736
00:13:06.176   05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:13:06.176    05:58:26 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576
00:13:06.176    05:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]]
00:13:06.176  
00:13:06.176  real	0m0.582s
00:13:06.176  user	0m0.281s
00:13:06.176  sys	0m0.192s
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:06.176  ************************************
00:13:06.176  END TEST dd_sparse_bdev_to_file
00:13:06.176  ************************************
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3
00:13:06.176  
00:13:06.176  real	0m2.192s
00:13:06.176  user	0m1.040s
00:13:06.176  sys	0m0.819s
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:06.176  ************************************
00:13:06.176  END TEST spdk_dd_sparse
00:13:06.176  ************************************
00:13:06.176   05:58:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:06.176   05:58:27 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:13:06.176   05:58:27 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.176   05:58:27 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.176   05:58:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:06.176  ************************************
00:13:06.176  START TEST spdk_dd_negative
00:13:06.176  ************************************
00:13:06.176   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:13:06.436  * Looking for test storage...
00:13:06.436  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-:
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-:
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<'
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:06.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.436  		--rc genhtml_branch_coverage=1
00:13:06.436  		--rc genhtml_function_coverage=1
00:13:06.436  		--rc genhtml_legend=1
00:13:06.436  		--rc geninfo_all_blocks=1
00:13:06.436  		--rc geninfo_unexecuted_blocks=1
00:13:06.436  		
00:13:06.436  		'
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:06.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.436  		--rc genhtml_branch_coverage=1
00:13:06.436  		--rc genhtml_function_coverage=1
00:13:06.436  		--rc genhtml_legend=1
00:13:06.436  		--rc geninfo_all_blocks=1
00:13:06.436  		--rc geninfo_unexecuted_blocks=1
00:13:06.436  		
00:13:06.436  		'
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:06.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.436  		--rc genhtml_branch_coverage=1
00:13:06.436  		--rc genhtml_function_coverage=1
00:13:06.436  		--rc genhtml_legend=1
00:13:06.436  		--rc geninfo_all_blocks=1
00:13:06.436  		--rc geninfo_unexecuted_blocks=1
00:13:06.436  		
00:13:06.436  		'
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:06.436  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:06.436  		--rc genhtml_branch_coverage=1
00:13:06.436  		--rc genhtml_function_coverage=1
00:13:06.436  		--rc genhtml_legend=1
00:13:06.436  		--rc geninfo_all_blocks=1
00:13:06.436  		--rc geninfo_unexecuted_blocks=1
00:13:06.436  		
00:13:06.436  		'
00:13:06.436    05:58:27 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:06.436     05:58:27 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # export PATH
00:13:06.436      05:58:27 spdk_dd.spdk_dd_negative -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:06.436  ************************************
00:13:06.436  START TEST dd_invalid_arguments
00:13:06.436  ************************************
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:13:06.436   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.437    05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.437    05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:06.437   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:13:06.437  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options]
00:13:06.437  
00:13:06.437  CPU options:
00:13:06.437   -m, --cpumask <mask or list>    core mask (like 0xF) or core list enclosed in '[]' for DPDK
00:13:06.437                                   (like [0,1,10])
00:13:06.437       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:13:06.437                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:13:06.437                             lcores and cpus list are grouped by '(' and ')', e.g. '--lcores "(5-7)@(10-12)"'
00:13:06.437                             Within the group, '-' is used for range separator,
00:13:06.437                             ',' is used for single number separator.
00:13:06.437                             '( )' can be omitted for single element group,
00:13:06.437                             '@' can be omitted if cpus and lcores have the same value
00:13:06.437       --disable-cpumask-locks    Disable CPU core lock files.
00:13:06.437       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:13:06.437                             pollers in the app support interrupt mode)
00:13:06.437   -p, --main-core <id>      main (primary) core for DPDK
00:13:06.437  
00:13:06.437  Configuration options:
00:13:06.437   -c, --config, --json  <config>     JSON config file
00:13:06.437   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:13:06.437       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:13:06.437       --wait-for-rpc        wait for RPCs to initialize subsystems
00:13:06.437       --rpcs-allowed	   comma-separated list of permitted RPCs
00:13:06.437       --json-ignore-init-errors    don't exit on invalid config entry
00:13:06.437  
00:13:06.437  Memory options:
00:13:06.437       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:13:06.437       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:13:06.437       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:13:06.437   -R, --huge-unlink         unlink huge files after initialization
00:13:06.437   -n, --mem-channels <num>  number of memory channels used for DPDK
00:13:06.437   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:13:06.437       --msg-mempool-size <size>  global message memory pool size, in number of messages (default: 262143)
00:13:06.437       --no-huge             run without using hugepages
00:13:06.437       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:13:06.437   -i, --shm-id <id>         shared memory ID (optional)
00:13:06.437   -g, --single-file-segments   force creating just one hugetlbfs file
00:13:06.437  
00:13:06.437  PCI options:
00:13:06.437   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:13:06.437   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:13:06.437   -u, --no-pci              disable PCI access
00:13:06.437       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:13:06.437  
00:13:06.437  Log options:
00:13:06.437   -L, --logflag <flag>      enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 
00:13:06.437                             app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 
00:13:06.437                             bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 
00:13:06.437                             blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 
00:13:06.437                             blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 
00:13:06.437                             iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 
00:13:06.437                             nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 
00:13:06.437                             sock_posix, spdk_aio_mgr_io, thread, trace, vbdev_delay, vbdev_gpt, 
00:13:06.437                             vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 
00:13:06.437                             vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, 
00:13:06.437                             virtio_user, virtio_vfio_user, vmd)
00:13:06.437       --silence-noticelog   disable notice level logging to stderr
00:13:06.437  
00:13:06.437  Trace options:
00:13:06.437       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:13:06.437                                   setting 0 to disable trace (default 32768)
00:13:06.437                                   Tracepoints vary in size and can use more than one trace entry.
00:13:06.437   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:13:06.696                group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 
00:13:06.696                             blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 
00:13:06.696                             bdev_raid, scheduler, all).
00:13:06.696                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:13:06.696                             a tracepoint group. First tpoint inside a group can be enabled by
00:13:06.697                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:13:06.697                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:13:06.697                             in /include/spdk_internal/trace_defs.h
00:13:06.697  
00:13:06.697  Other options:
00:13:06.697   -h, --help                show this usage
00:13:06.697   -v, --version             print SPDK version
00:13:06.697   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:13:06.697       --env-context         Opaque context for use of the env implementation
00:13:06.697  
00:13:06.697  Application specific:
00:13:06.697  [--------- DD Options ---------]
00:13:06.697   --if Input file. Must specify either --if or --ib.
00:13:06.697   --ib Input bdev. Must specify either --if or --ib.
00:13:06.697   --of Output file. Must specify either --of or --ob.
00:13:06.697   --ob Output bdev. Must specify either --of or --ob.
00:13:06.697   --iflag Input file flags.
00:13:06.697   --oflag Output file flags.
00:13:06.697   --bs I/O unit size (default: 4096)
00:13:06.697   --qd Queue depth (default: 2)
00:13:06.697   --count I/O unit count. The number of I/O units to copy. (default: all)
00:13:06.697   --skip Skip this many I/O units at start of input. (default: 0)
00:13:06.697   --seek Skip this many I/O units at start of output. (default: 0)
00:13:06.697   --aio Force usage of AIO. (by default io_uring is used if available)
00:13:06.697   --sparse Enable hole skipping in input target
00:13:06.697   Available iflag and oflag values:
00:13:06.697    append - append mode
00:13:06.697    direct - use direct I/O for data
00:13:06.697    directory - fail unless a directory
00:13:06.697    dsync - use synchronized I/O for data
00:13:06.697    noatime - do not update access time
00:13:06.697    noctty - do not assign controlling terminal from file
00:13:06.697    nofollow - do not follow symlinks
00:13:06.697    nonblock - use non-blocking I/O
00:13:06.697    sync - use synchronized I/O for data and metadata
00:13:06.437               /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii='
00:13:06.437  [2024-11-18 05:58:27.406488] spdk_dd.c:1480:main: *ERROR*: Invalid arguments
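As the two error lines show, the run fails as intended: '--ii=' is not a recognized option, so spdk_dd prints its usage and bails out with invalid arguments. For contrast, a minimal valid file-to-file invocation under the DD options listed above would look like (paths illustrative):

    # copy eight 4 KiB I/O units from one file to another (sketch)
    ./build/bin/spdk_dd --if=in.bin --of=out.bin --bs=4096 --count=8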
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:06.697  
00:13:06.697  real	0m0.121s
00:13:06.697  user	0m0.063s
00:13:06.697  sys	0m0.058s
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x
00:13:06.697  ************************************
00:13:06.697  END TEST dd_invalid_arguments
00:13:06.697  ************************************
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:06.697  ************************************
00:13:06.697  START TEST dd_double_input
00:13:06.697  ************************************
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.697    05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.697    05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:13:06.697  [2024-11-18 05:58:27.576391] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both.
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:06.697  
00:13:06.697  real	0m0.115s
00:13:06.697  user	0m0.067s
00:13:06.697  sys	0m0.048s
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x
00:13:06.697  ************************************
00:13:06.697  END TEST dd_double_input
00:13:06.697  ************************************
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.697   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:06.957  ************************************
00:13:06.957  START TEST dd_double_output
00:13:06.957  ************************************
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.957    05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.957    05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:13:06.957  [2024-11-18 05:58:27.743346] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both.
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:06.957  
00:13:06.957  real	0m0.113s
00:13:06.957  user	0m0.060s
00:13:06.957  sys	0m0.053s
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x
00:13:06.957  ************************************
00:13:06.957  END TEST dd_double_output
00:13:06.957  ************************************
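Note: the repeated type -t / type -P lines in each trace come from valid_exec_arg, which NOT uses to confirm its first argument is actually runnable before executing it. A hedged reconstruction from the traced case statement:

    # Sketch inferred from the traced lines; not the verbatim helper.
    valid_exec_arg() {
        local arg=$1
        case "$(type -t "$arg")" in
            function|builtin) ;;                              # callable as-is
            file) arg=$(type -P "$arg") && [[ -x $arg ]] ;;   # must resolve to an executable
            *) return 1 ;;
        esac
    }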
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:06.957  ************************************
00:13:06.957  START TEST dd_no_input
00:13:06.957  ************************************
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.957    05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.957    05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:06.957   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:13:06.957  [2024-11-18 05:58:27.909664] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib
00:13:07.216   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22
00:13:07.216   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:07.216   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:07.216   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:07.216  
00:13:07.216  real	0m0.109s
00:13:07.216  user	0m0.064s
00:13:07.216  sys	0m0.045s
00:13:07.216   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.216   05:58:27 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x
00:13:07.216  ************************************
00:13:07.216  END TEST dd_no_input
00:13:07.216  ************************************
00:13:07.216   05:58:27 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output
00:13:07.216   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:07.217  ************************************
00:13:07.217  START TEST dd_no_output
00:13:07.217  ************************************
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.217    05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.217    05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:07.217  [2024-11-18 05:58:28.067403] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:07.217  
00:13:07.217  real	0m0.107s
00:13:07.217  user	0m0.057s
00:13:07.217  sys	0m0.050s
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x
00:13:07.217  ************************************
00:13:07.217  END TEST dd_no_output
00:13:07.217  ************************************
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:07.217  ************************************
00:13:07.217  START TEST dd_wrong_blocksize
00:13:07.217  ************************************
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.217    05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.217    05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:07.217   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:13:07.476  [2024-11-18 05:58:28.226854] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:07.476  
00:13:07.476  real	0m0.111s
00:13:07.476  user	0m0.063s
00:13:07.476  sys	0m0.049s
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x
00:13:07.476  ************************************
00:13:07.476  END TEST dd_wrong_blocksize
00:13:07.476  ************************************
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:07.476  ************************************
00:13:07.476  START TEST dd_smaller_blocksize
00:13:07.476  ************************************
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.476    05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.476    05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:07.476   05:58:28 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:13:07.476  [2024-11-18 05:58:28.394073] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:07.476  [2024-11-18 05:58:28.394260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85752 ]
00:13:07.735  [2024-11-18 05:58:28.553951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:07.735  [2024-11-18 05:58:28.579928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:07.994  EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list
00:13:08.253  EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list
00:13:08.253  [2024-11-18 05:58:29.159677] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value
00:13:08.253  [2024-11-18 05:58:29.159787] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:08.512  [2024-11-18 05:58:29.236251] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:08.512  
00:13:08.512  real	0m0.981s
00:13:08.512  user	0m0.342s
00:13:08.512  sys	0m0.538s
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x
00:13:08.512  ************************************
00:13:08.512  END TEST dd_smaller_blocksize
00:13:08.512  ************************************
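Note: dd_smaller_blocksize is the first test here where spdk_dd dies with a status above 128 instead of exiting cleanly, so the es > 128 branch fires: 244 - 128 = 116 (the 128+signal convention), after which the case statement collapses the recognized status to es=1. Sketched below, with the mapping simplified to a catch-all:

    # Hedged sketch of the traced normalization (es=244 -> es=116 -> es=1).
    if (( es > 128 )); then
        es=$(( es - 128 ))   # statuses above 128 conventionally mean 128 + signal number
    fi
    case "$es" in
        *) es=1 ;;           # the harness only cares that the command failed
    esac
    (( !es == 0 ))           # final assertion: nonzero es means the negative test passed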
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:08.512  ************************************
00:13:08.512  START TEST dd_invalid_count
00:13:08.512  ************************************
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.512    05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.512    05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:13:08.512  [2024-11-18 05:58:29.416383] spdk_dd.c:1517:main: *ERROR*: Invalid --count value
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:08.512  
00:13:08.512  real	0m0.090s
00:13:08.512  user	0m0.044s
00:13:08.512  sys	0m0.047s
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:08.512   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x
00:13:08.512  ************************************
00:13:08.512  END TEST dd_invalid_count
00:13:08.512  ************************************
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:08.771  ************************************
00:13:08.771  START TEST dd_invalid_oflag
00:13:08.771  ************************************
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.771    05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.771    05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.771   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:13:08.772  [2024-11-18 05:58:29.569536] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:08.772  
00:13:08.772  real	0m0.112s
00:13:08.772  user	0m0.057s
00:13:08.772  sys	0m0.056s
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x
00:13:08.772  ************************************
00:13:08.772  END TEST dd_invalid_oflag
00:13:08.772  ************************************
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:08.772  ************************************
00:13:08.772  START TEST dd_invalid_iflag
00:13:08.772  ************************************
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.772    05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.772    05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:08.772   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:13:08.772  [2024-11-18 05:58:29.721280] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:09.119  
00:13:09.119  real	0m0.090s
00:13:09.119  user	0m0.057s
00:13:09.119  sys	0m0.034s
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x
00:13:09.119  ************************************
00:13:09.119  END TEST dd_invalid_iflag
00:13:09.119  ************************************
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:09.119  ************************************
00:13:09.119  START TEST dd_unknown_flag
00:13:09.119  ************************************
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.119    05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.119    05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:09.119   05:58:29 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:13:09.119  [2024-11-18 05:58:29.874452] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:09.119  [2024-11-18 05:58:29.874691] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85853 ]
00:13:09.119  [2024-11-18 05:58:30.026695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:09.119  [2024-11-18 05:58:30.047855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:09.378  [2024-11-18 05:58:30.095365] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1
00:13:09.378  [2024-11-18 05:58:30.095479] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:09.378  
00:13:09.378  Copying: 0/0 [B] (average 0 Bps)
00:13:09.378  [2024-11-18 05:58:30.095695] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice
00:13:09.378  [2024-11-18 05:58:30.168145] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:09.378  
00:13:09.378  
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:09.378  
00:13:09.378  real	0m0.448s
00:13:09.378  user	0m0.191s
00:13:09.378  sys	0m0.136s
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x
00:13:09.378  ************************************
00:13:09.378  END TEST dd_unknown_flag
00:13:09.378  ************************************
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:09.378  ************************************
00:13:09.378  START TEST dd_invalid_json
00:13:09.378  ************************************
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0
00:13:09.378    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # :
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.378    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.378    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:09.378   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:13:09.637  [2024-11-18 05:58:30.377684] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:09.637  [2024-11-18 05:58:30.377923] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85882 ]
00:13:09.637  [2024-11-18 05:58:30.539439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:09.637  [2024-11-18 05:58:30.565776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:09.637  [2024-11-18 05:58:30.565930] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty
00:13:09.637  [2024-11-18 05:58:30.565967] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:13:09.637  [2024-11-18 05:58:30.565995] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:09.637  [2024-11-18 05:58:30.566092] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:09.897  
00:13:09.897  real	0m0.349s
00:13:09.897  user	0m0.153s
00:13:09.897  sys	0m0.097s
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x
00:13:09.897  ************************************
00:13:09.897  END TEST dd_invalid_json
00:13:09.897  ************************************
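Note: the lone ':' traced at negative_dd.sh@94 is the body of the process substitution behind --json /dev/fd/62; it writes nothing, so spdk_dd reads an empty JSON document and fails with "JSON data cannot be empty", as required. Roughly (paths shortened):

    # Sketch of the invalid_json invocation; ':' emits no output, so /dev/fd/62
    # hands spdk_dd an empty JSON config.
    NOT build/bin/spdk_dd --if=dd.dump0 --of=dd.dump1 --json <(:)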
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:09.897  ************************************
00:13:09.897  START TEST dd_invalid_seek
00:13:09.897  ************************************
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:13:09.897    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.897    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable
00:13:09.897    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.897    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.897    05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:09.897   05:58:30 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:13:09.897  {
00:13:09.897    "subsystems": [
00:13:09.897      {
00:13:09.897        "subsystem": "bdev",
00:13:09.897        "config": [
00:13:09.897          {
00:13:09.897            "params": {
00:13:09.897              "block_size": 512,
00:13:09.897              "num_blocks": 512,
00:13:09.897              "name": "malloc0"
00:13:09.897            },
00:13:09.897            "method": "bdev_malloc_create"
00:13:09.897          },
00:13:09.897          {
00:13:09.897            "params": {
00:13:09.897              "block_size": 512,
00:13:09.897              "num_blocks": 512,
00:13:09.897              "name": "malloc1"
00:13:09.897            },
00:13:09.897            "method": "bdev_malloc_create"
00:13:09.897          },
00:13:09.897          {
00:13:09.897            "method": "bdev_wait_for_examine"
00:13:09.897          }
00:13:09.897        ]
00:13:09.897      }
00:13:09.897    ]
00:13:09.897  }
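Note: the JSON document above is what gen_conf emits for the two 512-block, 512-byte malloc bdevs declared via method_bdev_malloc_create_0/1; spdk_dd reads it through the /dev/fd/62 process substitution. An equivalent standalone invocation, sketched:

    # Hedged sketch: feed the same bdev config to spdk_dd by hand.
    build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json <(gen_conf) --bs=512
    # --seek=513 is one past the 512 blocks malloc1 exposes, so spdk_dd rejects it:
    # "--seek value too big (513) - only 512 blocks available in output".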
00:13:09.897  [2024-11-18 05:58:30.776003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:09.897  [2024-11-18 05:58:30.776194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85911 ]
00:13:10.157  [2024-11-18 05:58:30.936796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.157  [2024-11-18 05:58:30.961820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:10.157  [2024-11-18 05:58:31.045347] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output
00:13:10.157  [2024-11-18 05:58:31.045433] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:10.157  [2024-11-18 05:58:31.121627] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:10.417  
00:13:10.417  real	0m0.483s
00:13:10.417  user	0m0.250s
00:13:10.417  sys	0m0.156s
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x
00:13:10.417  ************************************
00:13:10.417  END TEST dd_invalid_seek
00:13:10.417  ************************************
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:10.417  ************************************
00:13:10.417  START TEST dd_invalid_skip
00:13:10.417  ************************************
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:13:10.417    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.417    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable
00:13:10.417    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.417    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.417    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:10.417   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:13:10.417  {
00:13:10.417    "subsystems": [
00:13:10.417      {
00:13:10.417        "subsystem": "bdev",
00:13:10.417        "config": [
00:13:10.417          {
00:13:10.417            "params": {
00:13:10.417              "block_size": 512,
00:13:10.417              "num_blocks": 512,
00:13:10.417              "name": "malloc0"
00:13:10.417            },
00:13:10.417            "method": "bdev_malloc_create"
00:13:10.417          },
00:13:10.417          {
00:13:10.417            "params": {
00:13:10.417              "block_size": 512,
00:13:10.417              "num_blocks": 512,
00:13:10.417              "name": "malloc1"
00:13:10.417            },
00:13:10.417            "method": "bdev_malloc_create"
00:13:10.417          },
00:13:10.417          {
00:13:10.417            "method": "bdev_wait_for_examine"
00:13:10.417          }
00:13:10.417        ]
00:13:10.417      }
00:13:10.417    ]
00:13:10.417  }
00:13:10.417  [2024-11-18 05:58:31.304095] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:10.417  [2024-11-18 05:58:31.304281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85947 ]
00:13:10.677  [2024-11-18 05:58:31.449720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.677  [2024-11-18 05:58:31.470142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:10.677  [2024-11-18 05:58:31.542893] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input
00:13:10.677  [2024-11-18 05:58:31.543036] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:10.677  [2024-11-18 05:58:31.615970] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:10.936  
00:13:10.936  real	0m0.459s
00:13:10.936  user	0m0.225s
00:13:10.936  sys	0m0.158s
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:10.936   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x
00:13:10.936  ************************************
00:13:10.936  END TEST dd_invalid_skip
00:13:10.937  ************************************
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:10.937  ************************************
00:13:10.937  START TEST dd_invalid_input_count
00:13:10.937  ************************************
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:13:10.937    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.937    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable
00:13:10.937    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.937    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.937    05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:10.937   05:58:31 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:13:10.937  {
00:13:10.937    "subsystems": [
00:13:10.937      {
00:13:10.937        "subsystem": "bdev",
00:13:10.937        "config": [
00:13:10.937          {
00:13:10.937            "params": {
00:13:10.937              "block_size": 512,
00:13:10.937              "num_blocks": 512,
00:13:10.937              "name": "malloc0"
00:13:10.937            },
00:13:10.937            "method": "bdev_malloc_create"
00:13:10.937          },
00:13:10.937          {
00:13:10.937            "params": {
00:13:10.937              "block_size": 512,
00:13:10.937              "num_blocks": 512,
00:13:10.937              "name": "malloc1"
00:13:10.937            },
00:13:10.937            "method": "bdev_malloc_create"
00:13:10.937          },
00:13:10.937          {
00:13:10.937            "method": "bdev_wait_for_examine"
00:13:10.937          }
00:13:10.937        ]
00:13:10.937      }
00:13:10.937    ]
00:13:10.937  }
00:13:10.937  [2024-11-18 05:58:31.824530] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:10.937  [2024-11-18 05:58:31.824724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85980 ]
00:13:11.197  [2024-11-18 05:58:31.977355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:11.197  [2024-11-18 05:58:31.997623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.197  [2024-11-18 05:58:32.073698] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input
00:13:11.197  [2024-11-18 05:58:32.073805] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:11.197  [2024-11-18 05:58:32.146103] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:11.456  
00:13:11.456  real	0m0.478s
00:13:11.456  user	0m0.255s
00:13:11.456  sys	0m0.143s
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:11.456   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x
00:13:11.456  ************************************
00:13:11.457  END TEST dd_invalid_input_count
00:13:11.457  ************************************
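The /dev/fd/62 in these invocations is process substitution: gen_conf (from dd/common.sh) prints the bdev-subsystem JSON shown above, and spdk_dd reads it as its config file. Equivalently, as a sketch of the same call:

    spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --bs=512 \
        --json <(gen_conf)   # the shell exposes gen_conf's stdout as /dev/fd/62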
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:11.457  ************************************
00:13:11.457  START TEST dd_invalid_output_count
00:13:11.457  ************************************
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:13:11.457    05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.457    05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable
00:13:11.457    05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:11.457    05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:11.457    05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:11.457   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:13:11.457  {
00:13:11.457    "subsystems": [
00:13:11.457      {
00:13:11.457        "subsystem": "bdev",
00:13:11.457        "config": [
00:13:11.457          {
00:13:11.457            "params": {
00:13:11.457              "block_size": 512,
00:13:11.457              "num_blocks": 512,
00:13:11.457              "name": "malloc0"
00:13:11.457            },
00:13:11.457            "method": "bdev_malloc_create"
00:13:11.457          },
00:13:11.457          {
00:13:11.457            "method": "bdev_wait_for_examine"
00:13:11.457          }
00:13:11.457        ]
00:13:11.457      }
00:13:11.457    ]
00:13:11.457  }
00:13:11.457  [2024-11-18 05:58:32.353821] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:11.457  [2024-11-18 05:58:32.354012] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86016 ]
00:13:11.716  [2024-11-18 05:58:32.506892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:11.716  [2024-11-18 05:58:32.526891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.716  [2024-11-18 05:58:32.591918] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output
00:13:11.716  [2024-11-18 05:58:32.592022] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:11.716  [2024-11-18 05:58:32.664935] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:11.976  
00:13:11.976  real	0m0.465s
00:13:11.976  user	0m0.233s
00:13:11.976  sys	0m0.151s
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x
00:13:11.976  ************************************
00:13:11.976  END TEST dd_invalid_output_count
00:13:11.976  ************************************
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:11.976  ************************************
00:13:11.976  START TEST dd_bs_not_multiple
00:13:11.976  ************************************
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:13:11.976    05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.976    05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable
00:13:11.976    05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:11.976    05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:11.976    05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:11.976   05:58:32 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:13:11.976  {
00:13:11.976    "subsystems": [
00:13:11.976      {
00:13:11.976        "subsystem": "bdev",
00:13:11.976        "config": [
00:13:11.976          {
00:13:11.976            "params": {
00:13:11.976              "block_size": 512,
00:13:11.976              "num_blocks": 512,
00:13:11.976              "name": "malloc0"
00:13:11.976            },
00:13:11.976            "method": "bdev_malloc_create"
00:13:11.976          },
00:13:11.976          {
00:13:11.976            "params": {
00:13:11.976              "block_size": 512,
00:13:11.976              "num_blocks": 512,
00:13:11.976              "name": "malloc1"
00:13:11.976            },
00:13:11.976            "method": "bdev_malloc_create"
00:13:11.976          },
00:13:11.976          {
00:13:11.976            "method": "bdev_wait_for_examine"
00:13:11.976          }
00:13:11.976        ]
00:13:11.976      }
00:13:11.976    ]
00:13:11.976  }
00:13:11.976  [2024-11-18 05:58:32.875568] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:11.976  [2024-11-18 05:58:32.876193] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86043 ]
00:13:12.236  [2024-11-18 05:58:33.025843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:12.236  [2024-11-18 05:58:33.047370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:12.236  [2024-11-18 05:58:33.124690] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512)
00:13:12.236  [2024-11-18 05:58:33.124769] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:12.236  [2024-11-18 05:58:33.196820] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:12.495  
00:13:12.495  real	0m0.465s
00:13:12.495  user	0m0.218s
00:13:12.495  sys	0m0.170s
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x
00:13:12.495  ************************************
00:13:12.495  END TEST dd_bs_not_multiple
00:13:12.495  ************************************
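At this point the negative suite has exercised three distinct spdk_dd argument checks; abridged from the traces above, the three failing invocations and the check that rejects each:

    spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --bs=512 ...  # more blocks than the input has  (spdk_dd.c:1110)
    spdk_dd --if=dd.dump0 --ob=malloc0 --count=513 --bs=512 ... # more blocks than the output has (spdk_dd.c:1152)
    spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 ...              # bs not a multiple of the 512 B native block (spdk_dd.c:1168)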
00:13:12.495  
00:13:12.495  real	0m6.188s
00:13:12.495  user	0m2.783s
00:13:12.495  sys	0m2.647s
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:12.495   05:58:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:12.495  ************************************
00:13:12.495  END TEST spdk_dd_negative
00:13:12.495  ************************************
00:13:12.495  
00:13:12.495  real	0m59.073s
00:13:12.495  user	0m32.101s
00:13:12.495  sys	0m16.181s
00:13:12.495   05:58:33 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:12.495   05:58:33 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:12.495  ************************************
00:13:12.495  END TEST spdk_dd
00:13:12.495  ************************************
00:13:12.495   05:58:33  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:13:12.495   05:58:33  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:13:12.495   05:58:33  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:12.495   05:58:33  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:12.495   05:58:33  -- common/autotest_common.sh@10 -- # set +x
00:13:12.495  ************************************
00:13:12.495  START TEST blockdev_nvme
00:13:12.495  ************************************
00:13:12.495   05:58:33 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:13:12.754  * Looking for test storage...
00:13:12.754  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:13:12.754    05:58:33 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:12.754     05:58:33 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:13:12.754     05:58:33 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:12.754    05:58:33 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:12.754     05:58:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:12.754    05:58:33 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:12.755    05:58:33 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:13:12.755    05:58:33 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:12.755    05:58:33 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:12.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:12.755  		--rc genhtml_branch_coverage=1
00:13:12.755  		--rc genhtml_function_coverage=1
00:13:12.755  		--rc genhtml_legend=1
00:13:12.755  		--rc geninfo_all_blocks=1
00:13:12.755  		--rc geninfo_unexecuted_blocks=1
00:13:12.755  		
00:13:12.755  		'
00:13:12.755    05:58:33 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:12.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:12.755  		--rc genhtml_branch_coverage=1
00:13:12.755  		--rc genhtml_function_coverage=1
00:13:12.755  		--rc genhtml_legend=1
00:13:12.755  		--rc geninfo_all_blocks=1
00:13:12.755  		--rc geninfo_unexecuted_blocks=1
00:13:12.755  		
00:13:12.755  		'
00:13:12.755    05:58:33 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:12.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:12.755  		--rc genhtml_branch_coverage=1
00:13:12.755  		--rc genhtml_function_coverage=1
00:13:12.755  		--rc genhtml_legend=1
00:13:12.755  		--rc geninfo_all_blocks=1
00:13:12.755  		--rc geninfo_unexecuted_blocks=1
00:13:12.755  		
00:13:12.755  		'
00:13:12.755    05:58:33 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:12.755  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:12.755  		--rc genhtml_branch_coverage=1
00:13:12.755  		--rc genhtml_function_coverage=1
00:13:12.755  		--rc genhtml_legend=1
00:13:12.755  		--rc geninfo_all_blocks=1
00:13:12.755  		--rc geninfo_unexecuted_blocks=1
00:13:12.755  		
00:13:12.755  		'
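The scripts/common.sh run above is cmp_versions deciding whether the installed lcov predates 2.x; the result selects the LCOV_OPTS/LCOV values exported next. The core comparison, reconstructed from the trace:

    IFS=.-: read -ra ver1 <<< "1.15"   # split version strings on dots, dashes, colons
    IFS=.-: read -ra ver2 <<< "2"
    (( ver1[0] < ver2[0] ))            # 1 < 2 already on the first component, so 'lt 1.15 2' holds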
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:13:12.755    05:58:33 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:13:12.755    05:58:33 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device=
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek=
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx=
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]]
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]]
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=86142
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 86142
00:13:12.755   05:58:33 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 86142 ']'
00:13:12.755   05:58:33 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:12.755   05:58:33 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:13:12.755   05:58:33 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:12.755   05:58:33 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:12.755  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:12.755   05:58:33 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:12.755   05:58:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:12.755  [2024-11-18 05:58:33.654017] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:12.755  [2024-11-18 05:58:33.654192] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86142 ]
00:13:13.014  [2024-11-18 05:58:33.800890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:13.014  [2024-11-18 05:58:33.821274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.014   05:58:33 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:13.014   05:58:33 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
00:13:13.014   05:58:33 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:13:13.014   05:58:33 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf
00:13:13.014   05:58:33 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:13:13.014   05:58:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:13:13.014    05:58:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\'''
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
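setup_nvme_conf builds the controller config at run time instead of shipping a static file: gen_nvme.sh emits the bdev_nvme_attach_controller JSON for the detected device (0000:00:10.0 here) and it is loaded into the running target over RPC. The same two steps as a sketch (the script itself captures the output with mapfile, as traced at @82):

    json=$(scripts/gen_nvme.sh)               # attach JSON for the QEMU NVMe controller
    rpc_cmd load_subsystem_config -j "$json"  # apply it to the live spdk_tgt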
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat
00:13:13.273    05:58:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.273    05:58:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.273    05:58:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:13:13.273    05:58:34 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:13.273    05:58:34 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:13:13.273    05:58:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "4481ce95-12aa-4289-b5ff-48c5feff76db"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "4481ce95-12aa-4289-b5ff-48c5feff76db",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:13:13.273    05:58:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:13:13.273   05:58:34 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 86142
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 86142 ']'
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 86142
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:13.273    05:58:34 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86142
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86142'
00:13:13.273  killing process with pid 86142
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 86142
00:13:13.273   05:58:34 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 86142
00:13:13.841   05:58:34 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:13:13.841   05:58:34 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:13:13.841   05:58:34 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:13.841   05:58:34 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:13.841   05:58:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:13.841  ************************************
00:13:13.841  START TEST bdev_hello_world
00:13:13.841  ************************************
00:13:13.841   05:58:34 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:13:13.841  [2024-11-18 05:58:34.598545] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:13.841  [2024-11-18 05:58:34.598773] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86193 ]
00:13:13.841  [2024-11-18 05:58:34.755072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:13.841  [2024-11-18 05:58:34.777419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:14.100  [2024-11-18 05:58:34.952146] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:13:14.100  [2024-11-18 05:58:34.952240] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:13:14.100  [2024-11-18 05:58:34.952293] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:13:14.100  [2024-11-18 05:58:34.954542] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:13:14.100  [2024-11-18 05:58:34.955106] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:13:14.100  [2024-11-18 05:58:34.955140] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:13:14.100  [2024-11-18 05:58:34.955289] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:13:14.100  
00:13:14.100  [2024-11-18 05:58:34.955318] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:13:14.359  
00:13:14.359  real	0m0.578s
00:13:14.359  user	0m0.336s
00:13:14.359  sys	0m0.143s
00:13:14.359   05:58:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:14.359   05:58:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:13:14.359  ************************************
00:13:14.359  END TEST bdev_hello_world
00:13:14.359  ************************************
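The hello-world pass can be reproduced standalone; this is the command run_test wrapped, with paths as in this workspace:

    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # opens Nvme0n1, writes 'Hello World!', reads it back, then stops the app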
00:13:14.359   05:58:35 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:13:14.359   05:58:35 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:14.359   05:58:35 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:14.359   05:58:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:14.359  ************************************
00:13:14.359  START TEST bdev_bounds
00:13:14.359  ************************************
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=86223
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 86223'
00:13:14.359  Process bdevio pid: 86223
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 86223
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 86223 ']'
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:14.359  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:14.359   05:58:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:14.359  [2024-11-18 05:58:35.224995] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:14.359  [2024-11-18 05:58:35.225195] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86223 ]
00:13:14.618  [2024-11-18 05:58:35.374900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:14.618  [2024-11-18 05:58:35.398295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:14.618  [2024-11-18 05:58:35.398409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:14.618  [2024-11-18 05:58:35.398490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:15.553   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:13:15.554  I/O targets:
00:13:15.554    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:13:15.554  
00:13:15.554  
00:13:15.554       CUnit - A unit testing framework for C - Version 2.1-3
00:13:15.554       http://cunit.sourceforge.net/
00:13:15.554  
00:13:15.554  
00:13:15.554  Suite: bdevio tests on: Nvme0n1
00:13:15.554    Test: blockdev write read block ...passed
00:13:15.554    Test: blockdev write zeroes read block ...passed
00:13:15.554    Test: blockdev write zeroes read no split ...passed
00:13:15.554    Test: blockdev write zeroes read split ...passed
00:13:15.554    Test: blockdev write zeroes read split partial ...passed
00:13:15.554    Test: blockdev reset ...[2024-11-18 05:58:36.358756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:13:15.554  passed
00:13:15.554    Test: blockdev write read 8 blocks ...[2024-11-18 05:58:36.361024] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:13:15.554  passed
00:13:15.554    Test: blockdev write read size > 128k ...passed
00:13:15.554    Test: blockdev write read invalid size ...passed
00:13:15.554    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:15.554    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:15.554    Test: blockdev write read max offset ...passed
00:13:15.554    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:15.554    Test: blockdev writev readv 8 blocks ...passed
00:13:15.554    Test: blockdev writev readv 30 x 1block ...passed
00:13:15.554    Test: blockdev writev readv block ...passed
00:13:15.554    Test: blockdev writev readv size > 128k ...passed
00:13:15.554    Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:15.554    Test: blockdev comparev and writev ...[2024-11-18 05:58:36.368016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x322a0d000 len:0x1000
00:13:15.554  [2024-11-18 05:58:36.368080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:13:15.554  passed
00:13:15.554    Test: blockdev nvme passthru rw ...passed
00:13:15.554    Test: blockdev nvme passthru vendor specific ...[2024-11-18 05:58:36.369005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:13:15.554  passed
00:13:15.554    Test: blockdev nvme admin passthru ...[2024-11-18 05:58:36.369285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:13:15.554  passed
00:13:15.554    Test: blockdev copy ...passed
00:13:15.554  
00:13:15.554  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:15.554                suites      1      1    n/a      0        0
00:13:15.554                 tests     23     23     23      0        0
00:13:15.554               asserts    152    152    152      0      n/a
00:13:15.554  
00:13:15.554  Elapsed time =    0.098 seconds
00:13:15.554  0
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 86223
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 86223 ']'
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 86223
00:13:15.554    05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:15.554    05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86223
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86223'
00:13:15.554  killing process with pid 86223
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 86223
00:13:15.554   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 86223
00:13:15.812  ************************************
00:13:15.812  END TEST bdev_bounds
00:13:15.812  ************************************
00:13:15.812   05:58:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:15.812  
00:13:15.812  real	0m1.417s
00:13:15.812  user	0m3.926s
00:13:15.812  sys	0m0.266s
00:13:15.812   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:15.812   05:58:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
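bdev_bounds runs the bdevio binary in server mode and then fires the CUnit suite at it over RPC. Reconstructed from the two commands in the trace (the backgrounding of the server is implied by the test flow, not shown explicitly):

    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # -w: wait for RPC before testing
    test/bdev/bdevio/tests.py perform_tests                        # triggers the 23 tests summarized above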
00:13:15.812   05:58:36 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:13:15.812   05:58:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:15.812   05:58:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:15.812   05:58:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:15.812  ************************************
00:13:15.812  START TEST bdev_nbd
00:13:15.812  ************************************
00:13:15.812   05:58:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:13:15.812    05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:15.812   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:15.812   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:15.812   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1')
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0')
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1')
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=86267
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 86267 /var/tmp/spdk-nbd.sock
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 86267 ']'
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:13:15.813  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:15.813   05:58:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:15.813  [2024-11-18 05:58:36.704101] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:15.813  [2024-11-18 05:58:36.704538] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:16.071  [2024-11-18 05:58:36.858035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:16.071  [2024-11-18 05:58:36.878787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:13:17.008    05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:13:17.008    05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:17.008   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:17.009  1+0 records in
00:13:17.009  1+0 records out
00:13:17.009  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569323 s, 7.2 MB/s
00:13:17.009    05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:17.009   05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
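The attach-and-verify loop just traced reduces to three operations per device; a sketch using the same RPC socket and names as the log:

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1   # kernel assigns /dev/nbd0
    grep -q -w nbd0 /proc/partitions                                  # poll until the device appears
    dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct           # read one 4 KiB block through nbd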
00:13:17.009    05:58:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:13:17.268    {
00:13:17.268      "nbd_device": "/dev/nbd0",
00:13:17.268      "bdev_name": "Nvme0n1"
00:13:17.268    }
00:13:17.268  ]'
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:13:17.268    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:13:17.268    {
00:13:17.268      "nbd_device": "/dev/nbd0",
00:13:17.268      "bdev_name": "Nvme0n1"
00:13:17.268    }
00:13:17.268  ]'
00:13:17.268    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:17.268   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:17.526    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:17.526   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:17.526   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:17.526   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:17.526   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:17.526   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:17.526   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:17.526   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:17.526    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:17.526    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:17.526     05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:17.784    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:17.784     05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:17.784     05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:17.784    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:17.784     05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:13:17.784     05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:17.784     05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:13:17.784    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:13:17.784    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:13:17.784   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:17.785   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:13:18.043  /dev/nbd0
00:13:18.043    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:18.043  1+0 records in
00:13:18.043  1+0 records out
00:13:18.043  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000490675 s, 8.3 MB/s
00:13:18.043    05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.043   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:18.044   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:18.044   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:18.044   05:58:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:18.044   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:18.044   05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:18.044    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:18.044    05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:18.044     05:58:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:18.303    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:18.303    {
00:13:18.303      "nbd_device": "/dev/nbd0",
00:13:18.303      "bdev_name": "Nvme0n1"
00:13:18.303    }
00:13:18.303  ]'
00:13:18.303     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:13:18.303    {
00:13:18.303      "nbd_device": "/dev/nbd0",
00:13:18.303      "bdev_name": "Nvme0n1"
00:13:18.303    }
00:13:18.303  ]'
00:13:18.303     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:18.303    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:13:18.303     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:13:18.303     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:18.303    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:13:18.303    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:13:18.303  256+0 records in
00:13:18.303  256+0 records out
00:13:18.303  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00614677 s, 171 MB/s
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:18.303   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:13:18.562  256+0 records in
00:13:18.562  256+0 records out
00:13:18.562  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0674547 s, 15.5 MB/s
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:18.562   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:18.821    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:18.821   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:18.821   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:18.821   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:18.821   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:18.821   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:18.821   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:18.821   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:18.821    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:18.821    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:18.821     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:19.080    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:19.080     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:19.080     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:19.080    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:19.080     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:13:19.080     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:19.080     05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:13:19.080    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:13:19.080    05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:13:19.080   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:13:19.080   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:13:19.080   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:13:19.080   05:58:39 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:19.080   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:19.080   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:13:19.080   05:58:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:13:19.339  malloc_lvol_verify
00:13:19.339   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:13:19.598  f30e3dd3-fcb5-4331-bf67-4aacc4b202e9
00:13:19.598   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:13:19.857  e69ce0cf-76a8-4e9c-abfe-ed496a94c2d9
00:13:19.857   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:13:20.116  /dev/nbd0
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:13:20.116  mke2fs 1.47.0 (5-Feb-2023)
00:13:20.116  
00:13:20.116  Filesystem too small for a journal
00:13:20.116  Discarding device blocks:    0/1024         done                            
00:13:20.116  Creating filesystem with 1024 4k blocks and 1024 inodes
00:13:20.116  
00:13:20.116  Allocating group tables: 0/1   done                            
00:13:20.116  Writing inode tables: 0/1   done                            
00:13:20.116  Writing superblocks and filesystem accounting information: 0/1   done
00:13:20.116  
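Note: the mkfs numbers are consistent with the lvol requested above: bdev_lvol_create was passed a size of 4, evidently interpreted as MiB, and 4 MiB / 4 KiB = 1024 blocks, exactly the "1024 4k blocks and 1024 inodes" mke2fs reports and the reason the filesystem is too small for a journal.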
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:20.116   05:58:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:20.375    05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 86267
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 86267 ']'
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 86267
00:13:20.375    05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:20.375    05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86267
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:20.375  killing process with pid 86267
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86267'
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 86267
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 86267
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:13:20.375  
00:13:20.375  real	0m4.711s
00:13:20.375  user	0m7.346s
00:13:20.375  sys	0m1.071s
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:20.375   05:58:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:20.375  ************************************
00:13:20.375  END TEST bdev_nbd
00:13:20.375  ************************************
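For reference, the whole bdev_nbd flow above reduces to a short RPC sequence against the spdk-nbd socket. A condensed sketch using only commands that appear in the trace, with $SPDK_DIR standing in for the repo path and looping/error handling omitted:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk Nvme0n1 /dev/nbd0      # export the bdev as a kernel block device
    $rpc nbd_get_disks                         # JSON list of active bdev<->nbd mappings
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
    dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write the pattern
    cmp -b -n 1M nbdrandtest /dev/nbd0         # read back through nbd and compare
    $rpc nbd_stop_disk /dev/nbd0               # tear the mapping down again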
00:13:20.633   05:58:41 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:13:20.633   05:58:41 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:13:20.633  skipping fio tests on NVMe due to multi-ns failures.
00:13:20.633   05:58:41 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:13:20.633   05:58:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:13:20.633   05:58:41 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:13:20.633   05:58:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:13:20.633   05:58:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:20.633   05:58:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:20.633  ************************************
00:13:20.633  START TEST bdev_verify
00:13:20.633  ************************************
00:13:20.633   05:58:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:13:20.634  [2024-11-18 05:58:41.445875] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:20.634  [2024-11-18 05:58:41.446014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86443 ]
00:13:20.634  [2024-11-18 05:58:41.592862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:20.891  [2024-11-18 05:58:41.615599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:20.891  [2024-11-18 05:58:41.615623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:20.891  Running I/O for 5 seconds...
00:13:23.248      21888.00 IOPS,    85.50 MiB/s
[2024-11-18T05:58:45.164Z]     22016.00 IOPS,    86.00 MiB/s
[2024-11-18T05:58:46.100Z]     22101.33 IOPS,    86.33 MiB/s
[2024-11-18T05:58:47.038Z]     21104.00 IOPS,    82.44 MiB/s
[2024-11-18T05:58:47.038Z]     21132.80 IOPS,    82.55 MiB/s
00:13:26.060                                                                                                  Latency(us)
00:13:26.060  
[2024-11-18T05:58:47.038Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:26.060  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:26.060  	 Verification LBA range: start 0x0 length 0xa0000
00:13:26.060  	 Nvme0n1             :       5.01   10549.53      41.21       0.00     0.00   12066.82    1199.01   20852.36
00:13:26.060  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:26.060  	 Verification LBA range: start 0xa0000 length 0xa0000
00:13:26.060  	 Nvme0n1             :       5.00   10577.99      41.32       0.00     0.00   12034.01     744.73   22043.93
00:13:26.060  
[2024-11-18T05:58:47.038Z]  ===================================================================================================================
00:13:26.060  
[2024-11-18T05:58:47.038Z]  Total                       :              21127.52      82.53       0.00     0.00   12050.41     744.73   22043.93
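Note: the totals cross-check: 21127.52 IOPS at the 4096-byte I/O size used in this run is 21127.52 * 4 KiB ~= 82.5 MiB/s, matching the reported 82.53 MiB/s.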
00:13:26.319  
00:13:26.319  real	0m5.812s
00:13:26.319  user	0m11.096s
00:13:26.319  sys	0m0.138s
00:13:26.319   05:58:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:26.319  ************************************
00:13:26.319  END TEST bdev_verify
00:13:26.319  ************************************
00:13:26.319   05:58:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
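For context, bdev_verify is a single bdevperf invocation; the flags from the command line above decode as follows (-C is passed through as-is here, and --json points bdevperf at the generated NVMe attach config):

    # bdevperf --json bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    #   -q 128     queue depth
    #   -o 4096    I/O size in bytes
    #   -w verify  write-then-read-back data verification workload
    #   -t 5       run time in seconds
    #   -m 0x3     core mask: cores 0 and 1 (matching the two reactor lines above)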
00:13:26.319   05:58:47 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:26.319   05:58:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:13:26.319   05:58:47 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:26.319   05:58:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:26.319  ************************************
00:13:26.319  START TEST bdev_verify_big_io
00:13:26.319  ************************************
00:13:26.319   05:58:47 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:26.579  [2024-11-18 05:58:47.316449] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:26.579  [2024-11-18 05:58:47.316656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86519 ]
00:13:26.579  [2024-11-18 05:58:47.468167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:26.579  [2024-11-18 05:58:47.491773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:26.579  [2024-11-18 05:58:47.491847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:26.838  Running I/O for 5 seconds...
00:13:29.152       1408.00 IOPS,    88.00 MiB/s
[2024-11-18T05:58:51.068Z]      1504.00 IOPS,    94.00 MiB/s
[2024-11-18T05:58:52.004Z]      1557.33 IOPS,    97.33 MiB/s
[2024-11-18T05:58:52.940Z]      1552.00 IOPS,    97.00 MiB/s
[2024-11-18T05:58:52.940Z]      1541.60 IOPS,    96.35 MiB/s
00:13:31.962                                                                                                  Latency(us)
00:13:31.962  
[2024-11-18T05:58:52.940Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:31.962  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:31.962  	 Verification LBA range: start 0x0 length 0xa000
00:13:31.962  	 Nvme0n1             :       5.12     765.66      47.85       0.00     0.00  163145.36    1087.30  191603.43
00:13:31.962  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:31.962  	 Verification LBA range: start 0xa000 length 0xa000
00:13:31.962  	 Nvme0n1             :       5.12     775.57      48.47       0.00     0.00  160961.35    1266.04  186837.18
00:13:31.962  
[2024-11-18T05:58:52.940Z]  ===================================================================================================================
00:13:31.962  
[2024-11-18T05:58:52.940Z]  Total                       :               1541.23      96.33       0.00     0.00  162046.43    1087.30  191603.43
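Note: same cross-check as the 4 KiB run: 1541.23 IOPS * 64 KiB ~= 96.3 MiB/s, matching the reported 96.33 MiB/s; the larger I/O size trades IOPS for per-I/O bandwidth.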
00:13:32.221  
00:13:32.221  real	0m5.835s
00:13:32.221  user	0m11.084s
00:13:32.221  sys	0m0.171s
00:13:32.221   05:58:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:32.221   05:58:53 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:32.221  ************************************
00:13:32.221  END TEST bdev_verify_big_io
00:13:32.221  ************************************
00:13:32.221   05:58:53 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:32.221   05:58:53 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:32.221   05:58:53 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:32.221   05:58:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:32.221  ************************************
00:13:32.221  START TEST bdev_write_zeroes
00:13:32.221  ************************************
00:13:32.221   05:58:53 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:32.480  [2024-11-18 05:58:53.204972] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:32.480  [2024-11-18 05:58:53.205168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86606 ]
00:13:32.480  [2024-11-18 05:58:53.359773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:32.480  [2024-11-18 05:58:53.382704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:32.738  Running I/O for 1 seconds...
00:13:33.673      49060.00 IOPS,   191.64 MiB/s
00:13:33.673                                                                                                  Latency(us)
00:13:33.673  
[2024-11-18T05:58:54.651Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:33.673  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:33.673  	 Nvme0n1             :       1.01   48953.77     191.23       0.00     0.00    2606.90    1042.62    7000.44
00:13:33.673  
[2024-11-18T05:58:54.651Z]  ===================================================================================================================
00:13:33.673  
[2024-11-18T05:58:54.651Z]  Total                       :              48953.77     191.23       0.00     0.00    2606.90    1042.62    7000.44
00:13:33.932  
00:13:33.932  real	0m1.637s
00:13:33.932  user	0m1.379s
00:13:33.932  sys	0m0.158s
00:13:33.932   05:58:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:33.932   05:58:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:13:33.932  ************************************
00:13:33.932  END TEST bdev_write_zeroes
00:13:33.932  ************************************
00:13:33.932   05:58:54 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:33.932   05:58:54 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:33.932   05:58:54 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:33.932   05:58:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:33.932  ************************************
00:13:33.932  START TEST bdev_json_nonenclosed
00:13:33.933  ************************************
00:13:33.933   05:58:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:33.933  [2024-11-18 05:58:54.894102] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:33.933  [2024-11-18 05:58:54.894287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86643 ]
00:13:34.191  [2024-11-18 05:58:55.053434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.191  [2024-11-18 05:58:55.080113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.191  [2024-11-18 05:58:55.080299] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:13:34.191  [2024-11-18 05:58:55.080363] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:13:34.191  [2024-11-18 05:58:55.080400] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:34.451  
00:13:34.451  real	0m0.339s
00:13:34.451  user	0m0.153s
00:13:34.451  sys	0m0.085s
00:13:34.451   05:58:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:34.451   05:58:55 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:13:34.451  ************************************
00:13:34.451  END TEST bdev_json_nonenclosed
00:13:34.451  ************************************
00:13:34.451   05:58:55 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:34.451   05:58:55 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:13:34.451   05:58:55 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:34.451   05:58:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:34.451  ************************************
00:13:34.451  START TEST bdev_json_nonarray
00:13:34.451  ************************************
00:13:34.451   05:58:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:34.451  [2024-11-18 05:58:55.287986] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:34.451  [2024-11-18 05:58:55.288653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86663 ]
00:13:34.710  [2024-11-18 05:58:55.451235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.710  [2024-11-18 05:58:55.478729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.710  [2024-11-18 05:58:55.478897] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:13:34.710  [2024-11-18 05:58:55.478935] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:13:34.710  [2024-11-18 05:58:55.478979] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:34.710  
00:13:34.710  real	0m0.344s
00:13:34.710  user	0m0.144s
00:13:34.710  sys	0m0.100s
00:13:34.710   05:58:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:34.710   05:58:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:13:34.710  ************************************
00:13:34.710  END TEST bdev_json_nonarray
00:13:34.710  ************************************
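Both JSON negative tests feed bdevperf a deliberately malformed config and expect the app to stop with a JSON error. The exact contents of nonenclosed.json and nonarray.json are not shown in the trace, but the two failure modes can be read off the error messages above; for contrast, a minimal valid shape looks like:

    cat > good.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
    EOF
    # nonenclosed: config not wrapped in {}      -> "not enclosed in {}."
    # nonarray:    "subsystems" is not an array  -> "'subsystems' should be an array."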
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:13:34.710   05:58:55 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:13:34.710  
00:13:34.710  real	0m22.219s
00:13:34.710  user	0m36.662s
00:13:34.710  sys	0m2.848s
00:13:34.710   05:58:55 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:34.710   05:58:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:34.710  ************************************
00:13:34.710  END TEST blockdev_nvme
00:13:34.710  ************************************
00:13:34.710    05:58:55  -- spdk/autotest.sh@209 -- # uname -s
00:13:34.710   05:58:55  -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:13:34.710   05:58:55  -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:13:34.710   05:58:55  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:34.710   05:58:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:34.710   05:58:55  -- common/autotest_common.sh@10 -- # set +x
00:13:34.710  ************************************
00:13:34.710  START TEST blockdev_nvme_gpt
00:13:34.710  ************************************
00:13:34.710   05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:13:35.042  * Looking for test storage...
00:13:35.042  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:13:35.042    05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:35.042     05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version
00:13:35.042     05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:35.042    05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:35.042    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:35.042     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:13:35.042     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:13:35.043     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:35.043     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:13:35.043    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:13:35.043     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:13:35.043     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:13:35.043     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:35.043     05:58:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:13:35.043    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:13:35.043    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:35.043    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:35.043    05:58:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
00:13:35.043    05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:35.043    05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:35.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.043  		--rc genhtml_branch_coverage=1
00:13:35.043  		--rc genhtml_function_coverage=1
00:13:35.043  		--rc genhtml_legend=1
00:13:35.043  		--rc geninfo_all_blocks=1
00:13:35.043  		--rc geninfo_unexecuted_blocks=1
00:13:35.043  		
00:13:35.043  		'
00:13:35.043    05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:35.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.043  		--rc genhtml_branch_coverage=1
00:13:35.043  		--rc genhtml_function_coverage=1
00:13:35.043  		--rc genhtml_legend=1
00:13:35.043  		--rc geninfo_all_blocks=1
00:13:35.043  		--rc geninfo_unexecuted_blocks=1
00:13:35.043  		
00:13:35.043  		'
00:13:35.043    05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:35.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.043  		--rc genhtml_branch_coverage=1
00:13:35.043  		--rc genhtml_function_coverage=1
00:13:35.043  		--rc genhtml_legend=1
00:13:35.043  		--rc geninfo_all_blocks=1
00:13:35.043  		--rc geninfo_unexecuted_blocks=1
00:13:35.043  		
00:13:35.043  		'
00:13:35.043    05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:35.043  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:35.043  		--rc genhtml_branch_coverage=1
00:13:35.043  		--rc genhtml_function_coverage=1
00:13:35.043  		--rc genhtml_legend=1
00:13:35.043  		--rc geninfo_all_blocks=1
00:13:35.043  		--rc geninfo_unexecuted_blocks=1
00:13:35.043  		
00:13:35.043  		'
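The block above is scripts/common.sh picking lcov option quoting: 'lt 1.15 2' splits both version strings on the IFS set '.-:' and compares them component by component. The comparison it performed, spelled out:

    IFS=.-: read -ra ver1 <<< "1.15"   # -> (1 15)
    IFS=.-: read -ra ver2 <<< "2"      # -> (2); missing components evidently treated as 0
    # v=0: 1 < 2, so "1.15 < 2" holds and lt returns 0 (true)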
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:13:35.043    05:58:55 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:13:35.043    05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device=
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek=
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx=
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]]
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]]
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=86747
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 86747
00:13:35.043   05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 86747 ']'
00:13:35.043   05:58:55 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:13:35.043   05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:35.043   05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:35.043   05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:35.043  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:35.043   05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:35.043   05:58:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:35.043  [2024-11-18 05:58:55.939735] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:35.043  [2024-11-18 05:58:55.939935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86747 ]
00:13:35.302  [2024-11-18 05:58:56.101439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:35.302  [2024-11-18 05:58:56.127470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
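Startup follows the usual autotest pattern: launch spdk_tgt, record its pid (86747 here), install a cleanup trap, and block in waitforlisten until the RPC socket accepts connections. Roughly, and assuming the target is backgrounded with '&' as the pid capture suggests:

    spdk_tgt & spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock; see the 'Waiting for process...' line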
00:13:35.870   05:58:56 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:35.870   05:58:56 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0
00:13:35.870   05:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:13:35.870   05:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf
00:13:35.870   05:58:56 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:13:36.439  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:13:36.439  Waiting for block devices as requested
00:13:36.439  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
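Note: setup.sh reset hands 0000:00:10.0 back from uio_pci_generic to the kernel nvme driver so the disk shows up as /dev/nvme0n1 for the parted/sgdisk steps below; the vda devices on 0000:00:03.0 are left alone because they hold active mounts. The second setup.sh run at 05:58:59 rebinds the device to uio_pci_generic for SPDK.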
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:13:36.439   05:58:57 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1')
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:13:36.439    05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:13:36.439  BYT;
00:13:36.439  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:13:36.439  BYT;
00:13:36.439  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:13:36.439   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:13:36.698    05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:13:36.698     05:58:57 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:13:36.698   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:13:36.698    05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:13:36.698     05:58:57 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:13:36.698    05:58:57 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:13:36.698   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:13:36.698   05:58:57 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:13:37.634  The operation has completed successfully.
00:13:37.634   05:58:58 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:13:39.011  The operation has completed successfully.
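The GPT setup above is three commands end to end, with the device node and GUIDs exactly as traced; the type GUIDs are the ones extracted from module/bdev/gpt/gpt.h, so SPDK's gpt module can later expose the partitions as Nvme0n1p1/Nvme0n1p2:

    parted -s /dev/nvme0n1 mklabel gpt \
        mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
    sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
    sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1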
00:13:39.011   05:58:59 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:13:39.011  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:13:39.270  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:13:39.529   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:13:39.529   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.529   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:39.529  []
00:13:39.529   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.529   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:13:39.529   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:13:39.529   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:13:39.529    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:13:39.788   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\'''
00:13:39.788   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.788   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:39.788   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.788   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:13:39.788   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.788   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:39.788   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.788   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat
00:13:39.788    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.788    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.788    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.788   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:13:39.788    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:13:39.788    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:39.788    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:39.788   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:13:39.789    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme0n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
00:13:39.789    05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name
00:13:39.789   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
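
[Annotation] The trace at blockdev.sh:747-749 above is the standard SPDK pattern for discovering which bdevs a test may use: fetch all bdevs over JSON-RPC, keep the unclaimed ones, peel out the names. A minimal standalone sketch of the same three steps, assuming a running target on the default socket and scripts/rpc.py in place of the in-script rpc_cmd wrapper:

    # capture the pretty-printed JSON of every unclaimed bdev line-by-line,
    # then re-feed the stream to jq to extract just the names
    mapfile -t bdevs < <(./scripts/rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false)')
    mapfile -t bdevs_name < <(printf '%s\n' "${bdevs[@]}" | jq -r .name)
    bdev_list=("${bdevs_name[@]}")     # here: Nvme0n1p1 Nvme0n1p2
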
00:13:39.789   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1p1
00:13:39.789   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:13:39.789   05:59:00 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 86747
00:13:39.789   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 86747 ']'
00:13:39.789   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 86747
00:13:39.789    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname
00:13:39.789   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:39.789    05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86747
00:13:40.047   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:40.047   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:40.047  killing process with pid 86747
00:13:40.047   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86747'
00:13:40.047   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 86747
00:13:40.047   05:59:00 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 86747
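
[Annotation] killprocess, traced at autotest_common.sh:954-978, is the standard teardown for the setup daemon. Condensed to its essentials (the uname/ps checks in the trace guard a sudo-owned-process special case that does not apply to reactor_0 here):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] && kill -0 "$pid"     # bail out unless the pid is set and alive
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap it so the RPC socket is freed for the next test
    }
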
00:13:40.306   05:59:01 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:13:40.306   05:59:01 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:13:40.306   05:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:40.306   05:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:40.306   05:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:40.306  ************************************
00:13:40.306  START TEST bdev_hello_world
00:13:40.306  ************************************
00:13:40.306   05:59:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:13:40.306  [2024-11-18 05:59:01.151852] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:40.306  [2024-11-18 05:59:01.152046] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87124 ]
00:13:40.565  [2024-11-18 05:59:01.302311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:40.565  [2024-11-18 05:59:01.324976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:40.565  [2024-11-18 05:59:01.503288] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:13:40.565  [2024-11-18 05:59:01.503370] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:13:40.565  [2024-11-18 05:59:01.503424] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:13:40.565  [2024-11-18 05:59:01.505949] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:13:40.565  [2024-11-18 05:59:01.506512] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:13:40.565  [2024-11-18 05:59:01.506551] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:13:40.565  [2024-11-18 05:59:01.506832] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:13:40.565  
00:13:40.565  [2024-11-18 05:59:01.506889] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:13:40.825  
00:13:40.825  real	0m0.577s
00:13:40.825  user	0m0.339s
00:13:40.825  sys	0m0.138s
00:13:40.825  ************************************
00:13:40.825  END TEST bdev_hello_world
00:13:40.825  ************************************
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
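
[Annotation] The passing run above can be reproduced by hand; a sketch, with paths relative to the SPDK checkout used in this job:

    # open Nvme0n1p1 per bdev.json, write "Hello World!", read it back, stop
    ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1p1

The NOTICE lines at 00:13:40.565 are the expected sequence: open bdev, open io channel, write complete, read back "Hello World!", stop app.
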
00:13:40.825   05:59:01 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:13:40.825   05:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:40.825   05:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:40.825   05:59:01 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:40.825  ************************************
00:13:40.825  START TEST bdev_bounds
00:13:40.825  ************************************
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=87150
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:13:40.825  Process bdevio pid: 87150
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 87150'
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 87150
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 87150 ']'
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:40.825  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:40.825   05:59:01 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:40.825  [2024-11-18 05:59:01.779664] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:40.825  [2024-11-18 05:59:01.779891] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87150 ]
00:13:41.084  [2024-11-18 05:59:01.933361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:41.084  [2024-11-18 05:59:01.956371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:41.084  [2024-11-18 05:59:01.956436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:41.084  [2024-11-18 05:59:01.956534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:42.023   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:42.023   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:13:42.023   05:59:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:13:42.023  I/O targets:
00:13:42.023    Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:13:42.023    Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:13:42.023  
00:13:42.023  
00:13:42.023       CUnit - A unit testing framework for C - Version 2.1-3
00:13:42.023       http://cunit.sourceforge.net/
00:13:42.023  
00:13:42.023  
00:13:42.023  Suite: bdevio tests on: Nvme0n1p2
00:13:42.023    Test: blockdev write read block ...passed
00:13:42.023    Test: blockdev write zeroes read block ...passed
00:13:42.023    Test: blockdev write zeroes read no split ...passed
00:13:42.023    Test: blockdev write zeroes read split ...passed
00:13:42.023    Test: blockdev write zeroes read split partial ...passed
00:13:42.023    Test: blockdev reset ...[2024-11-18 05:59:02.861496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:13:42.024  [2024-11-18 05:59:02.863962] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:13:42.024  passed
00:13:42.024    Test: blockdev write read 8 blocks ...passed
00:13:42.024    Test: blockdev write read size > 128k ...passed
00:13:42.024    Test: blockdev write read invalid size ...passed
00:13:42.024    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:42.024    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:42.024    Test: blockdev write read max offset ...passed
00:13:42.024    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:42.024    Test: blockdev writev readv 8 blocks ...passed
00:13:42.024    Test: blockdev writev readv 30 x 1block ...passed
00:13:42.024    Test: blockdev writev readv block ...passed
00:13:42.024    Test: blockdev writev readv size > 128k ...passed
00:13:42.024    Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:42.024    Test: blockdev comparev and writev ...[2024-11-18 05:59:02.872026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x32b40d000 len:0x1000
00:13:42.024  [2024-11-18 05:59:02.872112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:13:42.024  passed
00:13:42.024    Test: blockdev nvme passthru rw ...passed
00:13:42.024    Test: blockdev nvme passthru vendor specific ...passed
00:13:42.024    Test: blockdev nvme admin passthru ...passed
00:13:42.024    Test: blockdev copy ...passed
00:13:42.024  Suite: bdevio tests on: Nvme0n1p1
00:13:42.024    Test: blockdev write read block ...passed
00:13:42.024    Test: blockdev write zeroes read block ...passed
00:13:42.024    Test: blockdev write zeroes read no split ...passed
00:13:42.024    Test: blockdev write zeroes read split ...passed
00:13:42.024    Test: blockdev write zeroes read split partial ...passed
00:13:42.024    Test: blockdev reset ...[2024-11-18 05:59:02.885495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:13:42.024  [2024-11-18 05:59:02.887569] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:13:42.024  passed
00:13:42.024    Test: blockdev write read 8 blocks ...passed
00:13:42.024    Test: blockdev write read size > 128k ...passed
00:13:42.024    Test: blockdev write read invalid size ...passed
00:13:42.024    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:42.024    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:42.024    Test: blockdev write read max offset ...passed
00:13:42.024    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:42.024    Test: blockdev writev readv 8 blocks ...passed
00:13:42.024    Test: blockdev writev readv 30 x 1block ...passed
00:13:42.024    Test: blockdev writev readv block ...passed
00:13:42.024    Test: blockdev writev readv size > 128k ...passed
00:13:42.024    Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:42.024    Test: blockdev comparev and writev ...[2024-11-18 05:59:02.894723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x32b409000 len:0x1000
00:13:42.024  [2024-11-18 05:59:02.894844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:13:42.024  passed
00:13:42.024    Test: blockdev nvme passthru rw ...passed
00:13:42.024    Test: blockdev nvme passthru vendor specific ...passed
00:13:42.024    Test: blockdev nvme admin passthru ...passed
00:13:42.024    Test: blockdev copy ...passed
00:13:42.024  
00:13:42.024  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:42.024                suites      2      2    n/a      0        0
00:13:42.024                 tests     46     46     46      0        0
00:13:42.024               asserts    284    284    284      0      n/a
00:13:42.024  
00:13:42.024  Elapsed time =    0.113 seconds
00:13:42.024  0
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 87150
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 87150 ']'
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 87150
00:13:42.024    05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:42.024    05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87150
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87150'
00:13:42.024  killing process with pid 87150
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 87150
00:13:42.024   05:59:02 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 87150
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:42.283  
00:13:42.283  real	0m1.390s
00:13:42.283  user	0m3.757s
00:13:42.283  sys	0m0.284s
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:42.283  ************************************
00:13:42.283  END TEST bdev_bounds
00:13:42.283  ************************************
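
[Annotation] bdev_bounds drives the CUnit suite above in two steps, per the trace at blockdev.sh:288-293. A condensed sketch:

    # 1) start bdevio as an RPC server; -w makes it wait for a perform_tests RPC
    ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
    bdevio_pid=$!
    # 2) once /var/tmp/spdk.sock is listening, fire the whole suite over RPC
    ./test/bdev/bdevio/tests.py perform_tests
    kill "$bdevio_pid"; wait "$bdevio_pid"

The COMPARE FAILURE (02/85) completions logged mid-suite are expected output of the "comparev and writev" case, which checks that a mismatching COMPARE is rejected; the test still reports passed.
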
00:13:42.283   05:59:03 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:13:42.283   05:59:03 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:42.283   05:59:03 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:42.283   05:59:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:42.283  ************************************
00:13:42.283  START TEST bdev_nbd
00:13:42.283  ************************************
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:13:42.283    05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2')
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=87197
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:13:42.283   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 87197 /var/tmp/spdk-nbd.sock
00:13:42.284   05:59:03 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:42.284   05:59:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 87197 ']'
00:13:42.284   05:59:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:13:42.284   05:59:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:42.284  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:13:42.284   05:59:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:13:42.284   05:59:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:42.284   05:59:03 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:42.284  [2024-11-18 05:59:03.230827] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:42.284  [2024-11-18 05:59:03.230996] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:42.543  [2024-11-18 05:59:03.386640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:42.543  [2024-11-18 05:59:03.407979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
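
[Annotation] bdev_nbd runs against a dedicated bdev_svc app on its own RPC socket (note the -s /var/tmp/spdk-nbd.sock on every rpc.py call that follows). A sketch of the launch traced at blockdev.sh:316-319:

    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
        --json ./test/bdev/bdev.json &
    nbd_pid=$!
    # simplified stand-in for the waitforlisten retry loop traced above
    until [ -S /var/tmp/spdk-nbd.sock ]; do sleep 0.1; done
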
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:13:43.480    05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:13:43.480    05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:43.480  1+0 records in
00:13:43.480  1+0 records out
00:13:43.480  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000343273 s, 11.9 MB/s
00:13:43.480    05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:43.480   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:13:43.480    05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:13:43.739    05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:43.739   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:43.739  1+0 records in
00:13:43.739  1+0 records out
00:13:43.739  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500631 s, 8.2 MB/s
00:13:43.739    05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:43.740   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:43.740   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:43.740   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:43.740   05:59:04 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:43.740   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:43.740   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
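
[Annotation] Both exports went through waitfornbd above. The helper's shape, condensed into a sketch (the retry delay and the /tmp scratch path are assumptions; the traced run needs no retries, and the real helper also retries the dd itself):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do      # poll until the kernel registers the device
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # prove it is actually readable: one direct 4 KiB read must return data
        dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        local size
        size=$(stat -c %s /tmp/nbdtest)
        rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    }
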
00:13:43.740    05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:13:43.999    {
00:13:43.999      "nbd_device": "/dev/nbd0",
00:13:43.999      "bdev_name": "Nvme0n1p1"
00:13:43.999    },
00:13:43.999    {
00:13:43.999      "nbd_device": "/dev/nbd1",
00:13:43.999      "bdev_name": "Nvme0n1p2"
00:13:43.999    }
00:13:43.999  ]'
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:13:43.999    05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:13:43.999    {
00:13:43.999      "nbd_device": "/dev/nbd0",
00:13:43.999      "bdev_name": "Nvme0n1p1"
00:13:43.999    },
00:13:43.999    {
00:13:43.999      "nbd_device": "/dev/nbd1",
00:13:43.999      "bdev_name": "Nvme0n1p2"
00:13:43.999    }
00:13:43.999  ]'
00:13:43.999    05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
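
[Annotation] nbd_get_disks is the source of truth for which exports exist; the device paths are peeled out of its JSON with jq, exactly as traced:

    ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device'
    # -> /dev/nbd0
    #    /dev/nbd1
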
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:43.999   05:59:04 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:44.258    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:44.258   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:13:44.517    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:44.517   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:44.517   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:44.517   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:44.517   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:44.517   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:44.517   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:44.517   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
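
[Annotation] Teardown is the mirror image: nbd_stop_disk per device, then waitfornbd_exit polls /proc/partitions until the entry disappears. A sketch (the sleep interval is an assumption; the traced grep at nbd_common.sh:38 appears to fail on its first try here, i.e. the device is already gone):

    for nbd in /dev/nbd0 /dev/nbd1; do
        ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$nbd"
        # wait for the kernel to drop the entry before the node is reused
        while grep -q -w "${nbd#/dev/}" /proc/partitions; do sleep 0.1; done
    done
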
00:13:44.517    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:44.517    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:44.517     05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:44.776    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:44.776     05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:44.776     05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:44.776    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:44.776     05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:13:44.776     05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:44.776     05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:13:44.776    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:13:44.776    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:13:44.776   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:44.777   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:44.777   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:44.777   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:13:44.777   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:44.777   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:44.777   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
00:13:45.036  /dev/nbd0
00:13:45.036    05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:45.036  1+0 records in
00:13:45.036  1+0 records out
00:13:45.036  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487267 s, 8.4 MB/s
00:13:45.036    05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:45.036   05:59:05 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
00:13:45.295  /dev/nbd1
00:13:45.295    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:45.295  1+0 records in
00:13:45.295  1+0 records out
00:13:45.295  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00277809 s, 1.5 MB/s
00:13:45.295    05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:45.295   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:13:45.295    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:45.295    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:45.295     05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:45.554    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:13:45.554    {
00:13:45.554      "nbd_device": "/dev/nbd0",
00:13:45.554      "bdev_name": "Nvme0n1p1"
00:13:45.554    },
00:13:45.554    {
00:13:45.554      "nbd_device": "/dev/nbd1",
00:13:45.554      "bdev_name": "Nvme0n1p2"
00:13:45.554    }
00:13:45.554  ]'
00:13:45.554     05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:13:45.554    {
00:13:45.554      "nbd_device": "/dev/nbd0",
00:13:45.554      "bdev_name": "Nvme0n1p1"
00:13:45.554    },
00:13:45.554    {
00:13:45.554      "nbd_device": "/dev/nbd1",
00:13:45.554      "bdev_name": "Nvme0n1p2"
00:13:45.554    }
00:13:45.554  ]'
00:13:45.554     05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:45.554    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:13:45.554  /dev/nbd1'
00:13:45.554     05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:13:45.554  /dev/nbd1'
00:13:45.555     05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:45.555    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2
00:13:45.555    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:13:45.555  256+0 records in
00:13:45.555  256+0 records out
00:13:45.555  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00755105 s, 139 MB/s
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:45.555   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:13:45.814  256+0 records in
00:13:45.814  256+0 records out
00:13:45.814  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.101402 s, 10.3 MB/s
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:13:45.814  256+0 records in
00:13:45.814  256+0 records out
00:13:45.814  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0963569 s, 10.9 MB/s
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
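
[Annotation] That was the whole data-integrity round trip, in three moves per the trace: generate 1 MiB of random data once, write it through every export, then byte-compare each export against the same file. As a sketch (scratch path assumed):

    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256        # 1 MiB pattern
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M /tmp/nbdrandtest "$nbd"                        # non-zero exit on any mismatch
    done
    rm /tmp/nbdrandtest
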
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:45.814   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:46.073    05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:46.073   05:59:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:13:46.332    05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:13:46.332   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:13:46.332   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:13:46.332   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:46.332   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:46.332   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:13:46.332   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:46.332   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:46.332    05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:46.332    05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:46.332     05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:46.592    05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:46.592     05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:46.592     05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:46.592    05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:46.592     05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:13:46.592     05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:46.592     05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:13:46.592    05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:13:46.592    05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:13:46.592   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:13:46.592   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:13:46.592   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:13:46.592   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:46.592   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:46.592   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:13:46.592   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:13:46.851  malloc_lvol_verify
00:13:46.851   05:59:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:13:47.110  e44ab556-23e2-47b8-ab73-1a4cf94b48b5
00:13:47.110   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:13:47.369  c3c165d7-6cf7-448b-8c23-cd1c86973780
00:13:47.369   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:13:47.629  /dev/nbd0
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:13:47.629  mke2fs 1.47.0 (5-Feb-2023)
00:13:47.629  
00:13:47.629  Filesystem too small for a journal
00:13:47.629  Discarding device blocks:    0/1024         done                            
00:13:47.629  Creating filesystem with 1024 4k blocks and 1024 inodes
00:13:47.629  
00:13:47.629  Allocating group tables: 0/1   done                            
00:13:47.629  Writing inode tables: 0/1   done                            
00:13:47.629  Writing superblocks and filesystem accounting information: 0/1   done
00:13:47.629  
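
[Annotation] nbd_with_lvol_verify stacks a logical volume on a malloc bdev, exports it over nbd, and lets mkfs.ext4 prove the kernel sees a usable block device. The traced RPCs, condensed (sizes as used in this run: a 16 MiB malloc bdev with 512-byte blocks, a 4 MiB lvol):

    RPC="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs
    $RPC bdev_lvol_create lvol 4 -l lvs
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0    # "Filesystem too small for a journal" is expected at 4 MiB
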
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:47.629   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:47.889    05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 87197
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 87197 ']'
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 87197
00:13:47.889    05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:47.889    05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87197
00:13:47.889  killing process with pid 87197
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87197'
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 87197
00:13:47.889   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 87197
00:13:48.148   05:59:08 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:13:48.148  
00:13:48.148  real	0m5.785s
00:13:48.148  user	0m8.832s
00:13:48.148  sys	0m1.545s
00:13:48.148   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.148   05:59:08 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:48.148  ************************************
00:13:48.148  END TEST bdev_nbd
00:13:48.148  ************************************
00:13:48.148   05:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:13:48.148   05:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']'
00:13:48.148   05:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']'
00:13:48.148  skipping fio tests on NVMe due to multi-ns failures.
00:13:48.148   05:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:13:48.148   05:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:13:48.148   05:59:08 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:13:48.148   05:59:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:13:48.148   05:59:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:48.148   05:59:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:48.148  ************************************
00:13:48.148  START TEST bdev_verify
00:13:48.148  ************************************
00:13:48.148   05:59:09 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:13:48.148  [2024-11-18 05:59:09.058289] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:48.148  [2024-11-18 05:59:09.058498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87431 ]
00:13:48.406  [2024-11-18 05:59:09.212873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:48.406  [2024-11-18 05:59:09.234297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:48.406  [2024-11-18 05:59:09.234390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:48.664  Running I/O for 5 seconds...
00:13:50.538      19712.00 IOPS,    77.00 MiB/s
[2024-11-18T05:59:12.487Z]     19456.00 IOPS,    76.00 MiB/s
[2024-11-18T05:59:13.875Z]     20096.00 IOPS,    78.50 MiB/s
[2024-11-18T05:59:14.813Z]     20384.00 IOPS,    79.62 MiB/s
[2024-11-18T05:59:14.813Z]     20531.20 IOPS,    80.20 MiB/s
00:13:53.835                                                                                                  Latency(us)
00:13:53.835  
[2024-11-18T05:59:14.813Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:53.835  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:53.835  	 Verification LBA range: start 0x0 length 0x4ff80
00:13:53.835  	 Nvme0n1p1           :       5.03    5118.03      19.99       0.00     0.00   24933.27    5421.61   29074.15
00:13:53.835  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:53.835  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:13:53.835  	 Nvme0n1p1           :       5.03    5116.37      19.99       0.00     0.00   24932.53    5570.56   30146.56
00:13:53.835  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:53.835  	 Verification LBA range: start 0x0 length 0x4ff7f
00:13:53.836  	 Nvme0n1p2           :       5.03    5113.69      19.98       0.00     0.00   24892.20    3500.22   29312.47
00:13:53.836  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:53.836  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:13:53.836  	 Nvme0n1p2           :       5.03    5114.27      19.98       0.00     0.00   24888.46    4259.84   24665.37
00:13:53.836  
[2024-11-18T05:59:14.814Z]  ===================================================================================================================
00:13:53.836  
[2024-11-18T05:59:14.814Z]  Total                       :              20462.35      79.93       0.00     0.00   24911.61    3500.22   30146.56
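(The Total row is self-consistent: throughput = IOPS x I/O size, 20462.35 x 4096 B = 83,813,786 B/s, and 83,813,786 / 1,048,576 = 79.93 MiB/s, matching the MiB/s column.)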
00:13:54.095  
00:13:54.095  real	0m5.897s
00:13:54.095  user	0m11.185s
00:13:54.095  sys	0m0.176s
00:13:54.095   05:59:14 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:54.095   05:59:14 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:13:54.095  ************************************
00:13:54.095  END TEST bdev_verify
00:13:54.095  ************************************
00:13:54.095   05:59:14 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:54.095   05:59:14 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:13:54.095   05:59:14 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:54.095   05:59:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:13:54.095  ************************************
00:13:54.095  START TEST bdev_verify_big_io
00:13:54.095  ************************************
00:13:54.095   05:59:14 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:13:54.095  [2024-11-18 05:59:15.013648] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:13:54.095  [2024-11-18 05:59:15.013878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87507 ]
00:13:54.354  [2024-11-18 05:59:15.173661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:13:54.354  [2024-11-18 05:59:15.200067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:54.354  [2024-11-18 05:59:15.200112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:54.612  Running I/O for 5 seconds...
00:13:56.925       1408.00 IOPS,    88.00 MiB/s
[2024-11-18T05:59:19.281Z]      1664.50 IOPS,   104.03 MiB/s
[2024-11-18T05:59:20.215Z]      1770.33 IOPS,   110.65 MiB/s
[2024-11-18T05:59:20.783Z]      1698.25 IOPS,   106.14 MiB/s
00:13:59.805                                                                                                  Latency(us)
00:13:59.805  
[2024-11-18T05:59:20.783Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:13:59.805  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:59.805  	 Verification LBA range: start 0x0 length 0x4ff8
00:13:59.805  	 Nvme0n1p1           :       5.21     392.90      24.56       0.00     0.00  319551.10    4974.78  350796.33
00:13:59.805  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:59.805  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:13:59.805  	 Nvme0n1p1           :       5.11     425.66      26.60       0.00     0.00  293208.73    4855.62  337450.82
00:13:59.805  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:59.805  	 Verification LBA range: start 0x0 length 0x4ff7
00:13:59.805  	 Nvme0n1p2           :       5.21     392.55      24.53       0.00     0.00  309747.29    3217.22  354609.34
00:13:59.805  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:59.805  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:13:59.805  	 Nvme0n1p2           :       5.19     443.91      27.74       0.00     0.00  273613.28    1608.61  324105.31
00:13:59.805  
[2024-11-18T05:59:20.783Z]  ===================================================================================================================
00:13:59.805  
[2024-11-18T05:59:20.783Z]  Total                       :               1655.03     103.44       0.00     0.00  298183.15    1608.61  354609.34
00:14:00.372  
00:14:00.372  real	0m6.153s
00:14:00.372  user	0m11.695s
00:14:00.372  sys	0m0.185s
00:14:00.372  ************************************
00:14:00.372  END TEST bdev_verify_big_io
00:14:00.372  ************************************
00:14:00.372   05:59:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:00.372   05:59:21 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:14:00.372   05:59:21 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:00.372   05:59:21 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:00.372   05:59:21 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:00.372   05:59:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:00.372  ************************************
00:14:00.372  START TEST bdev_write_zeroes
00:14:00.372  ************************************
00:14:00.372   05:59:21 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:00.372  [2024-11-18 05:59:21.209458] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:00.372  [2024-11-18 05:59:21.209653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87595 ]
00:14:00.631  [2024-11-18 05:59:21.355078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:00.631  [2024-11-18 05:59:21.377834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:00.631  Running I/O for 1 seconds...
00:14:02.006      44416.00 IOPS,   173.50 MiB/s
00:14:02.006                                                                                                  Latency(us)
00:14:02.006  
[2024-11-18T05:59:22.984Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:02.006  Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:02.006  	 Nvme0n1p1           :       1.01   22188.52      86.67       0.00     0.00    5753.13    4140.68   10128.29
00:14:02.006  Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:02.006  	 Nvme0n1p2           :       1.01   22138.19      86.48       0.00     0.00    5755.01    4081.11    9830.40
00:14:02.006  
[2024-11-18T05:59:22.984Z]  ===================================================================================================================
00:14:02.006  
[2024-11-18T05:59:22.984Z]  Total                       :              44326.71     173.15       0.00     0.00    5754.07    4081.11   10128.29
00:14:02.006  
00:14:02.006  real	0m1.637s
00:14:02.006  user	0m1.401s
00:14:02.006  sys	0m0.135s
00:14:02.006  ************************************
00:14:02.006  END TEST bdev_write_zeroes
00:14:02.006  ************************************
00:14:02.006   05:59:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:02.006   05:59:22 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:02.006   05:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:02.006   05:59:22 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:02.006   05:59:22 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:02.006   05:59:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:02.006  ************************************
00:14:02.006  START TEST bdev_json_nonenclosed
00:14:02.006  ************************************
00:14:02.006   05:59:22 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:02.006  [2024-11-18 05:59:22.909138] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:02.007  [2024-11-18 05:59:22.909333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87631 ]
00:14:02.266  [2024-11-18 05:59:23.068988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:02.266  [2024-11-18 05:59:23.094001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:02.266  [2024-11-18 05:59:23.094124] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:14:02.266  [2024-11-18 05:59:23.094158] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:14:02.266  [2024-11-18 05:59:23.094181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:02.266  ************************************
00:14:02.266  END TEST bdev_json_nonenclosed
00:14:02.266  ************************************
00:14:02.266  
00:14:02.266  real	0m0.345s
00:14:02.266  user	0m0.154s
00:14:02.266  sys	0m0.089s
00:14:02.266   05:59:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:02.266   05:59:23 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:14:02.266   05:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:02.266   05:59:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:02.266   05:59:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:02.266   05:59:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:02.525  ************************************
00:14:02.525  START TEST bdev_json_nonarray
00:14:02.525  ************************************
00:14:02.525   05:59:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:02.525  [2024-11-18 05:59:23.303665] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:02.525  [2024-11-18 05:59:23.303875] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87657 ]
00:14:02.525  [2024-11-18 05:59:23.462088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:02.525  [2024-11-18 05:59:23.489549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:02.525  [2024-11-18 05:59:23.489703] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:14:02.525  [2024-11-18 05:59:23.489750] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:14:02.525  [2024-11-18 05:59:23.489822] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:02.784  
00:14:02.784  real	0m0.347s
00:14:02.784  user	0m0.147s
00:14:02.784  sys	0m0.100s
00:14:02.784  ************************************
00:14:02.784  END TEST bdev_json_nonarray
00:14:02.784  ************************************
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
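Both negative tests above drive json_config's validation: nonenclosed.json is rejected because the document is not enclosed in {}, nonarray.json because its 'subsystems' member is not an array, and in each case spdk_app_stop'd on non-zero is the expected outcome, which the harness records as a pass. Per those two error messages, a minimally well-formed config is a JSON object carrying a 'subsystems' array; a sketch (the 'bdev' entry is illustrative, the object-with-array shape is what the validator checks):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": []
        }
      ]
    }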
00:14:02.784   05:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]]
00:14:02.784   05:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]]
00:14:02.784   05:59:23 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:14:02.784   05:59:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:02.784   05:59:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:02.784   05:59:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:02.784  ************************************
00:14:02.784  START TEST bdev_gpt_uuid
00:14:02.784  ************************************
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=87681
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 87681
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 87681 ']'
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:02.784  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:02.784   05:59:23 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:02.784  [2024-11-18 05:59:23.743417] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:02.784  [2024-11-18 05:59:23.744012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87681 ]
00:14:03.043  [2024-11-18 05:59:23.909408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:03.043  [2024-11-18 05:59:23.935530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:03.981  Some configs were skipped because the RPC state that can call them passed over.
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[
00:14:03.981  {
00:14:03.981  "name": "Nvme0n1p1",
00:14:03.981  "aliases": [
00:14:03.981  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:14:03.981  ],
00:14:03.981  "product_name": "GPT Disk",
00:14:03.981  "block_size": 4096,
00:14:03.981  "num_blocks": 655104,
00:14:03.981  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:14:03.981  "assigned_rate_limits": {
00:14:03.981  "rw_ios_per_sec": 0,
00:14:03.981  "rw_mbytes_per_sec": 0,
00:14:03.981  "r_mbytes_per_sec": 0,
00:14:03.981  "w_mbytes_per_sec": 0
00:14:03.981  },
00:14:03.981  "claimed": false,
00:14:03.981  "zoned": false,
00:14:03.981  "supported_io_types": {
00:14:03.981  "read": true,
00:14:03.981  "write": true,
00:14:03.981  "unmap": true,
00:14:03.981  "flush": true,
00:14:03.981  "reset": true,
00:14:03.981  "nvme_admin": false,
00:14:03.981  "nvme_io": false,
00:14:03.981  "nvme_io_md": false,
00:14:03.981  "write_zeroes": true,
00:14:03.981  "zcopy": false,
00:14:03.981  "get_zone_info": false,
00:14:03.981  "zone_management": false,
00:14:03.981  "zone_append": false,
00:14:03.981  "compare": true,
00:14:03.981  "compare_and_write": false,
00:14:03.981  "abort": true,
00:14:03.981  "seek_hole": false,
00:14:03.981  "seek_data": false,
00:14:03.981  "copy": true,
00:14:03.981  "nvme_iov_md": false
00:14:03.981  },
00:14:03.981  "driver_specific": {
00:14:03.981  "gpt": {
00:14:03.981  "base_bdev": "Nvme0n1",
00:14:03.981  "offset_blocks": 256,
00:14:03.981  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:14:03.981  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:14:03.981  "partition_name": "SPDK_TEST_first"
00:14:03.981  }
00:14:03.981  }
00:14:03.981  }
00:14:03.981  ]'
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]]
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]'
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:03.981    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:03.981   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[
00:14:03.981  {
00:14:03.981  "name": "Nvme0n1p2",
00:14:03.981  "aliases": [
00:14:03.981  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:14:03.981  ],
00:14:03.981  "product_name": "GPT Disk",
00:14:03.981  "block_size": 4096,
00:14:03.981  "num_blocks": 655103,
00:14:03.981  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:14:03.981  "assigned_rate_limits": {
00:14:03.981  "rw_ios_per_sec": 0,
00:14:03.981  "rw_mbytes_per_sec": 0,
00:14:03.981  "r_mbytes_per_sec": 0,
00:14:03.981  "w_mbytes_per_sec": 0
00:14:03.981  },
00:14:03.981  "claimed": false,
00:14:03.981  "zoned": false,
00:14:03.981  "supported_io_types": {
00:14:03.981  "read": true,
00:14:03.982  "write": true,
00:14:03.982  "unmap": true,
00:14:03.982  "flush": true,
00:14:03.982  "reset": true,
00:14:03.982  "nvme_admin": false,
00:14:03.982  "nvme_io": false,
00:14:03.982  "nvme_io_md": false,
00:14:03.982  "write_zeroes": true,
00:14:03.982  "zcopy": false,
00:14:03.982  "get_zone_info": false,
00:14:03.982  "zone_management": false,
00:14:03.982  "zone_append": false,
00:14:03.982  "compare": true,
00:14:03.982  "compare_and_write": false,
00:14:03.982  "abort": true,
00:14:03.982  "seek_hole": false,
00:14:03.982  "seek_data": false,
00:14:03.982  "copy": true,
00:14:03.982  "nvme_iov_md": false
00:14:03.982  },
00:14:03.982  "driver_specific": {
00:14:03.982  "gpt": {
00:14:03.982  "base_bdev": "Nvme0n1",
00:14:03.982  "offset_blocks": 655360,
00:14:03.982  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:14:03.982  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:14:03.982  "partition_name": "SPDK_TEST_second"
00:14:03.982  }
00:14:03.982  }
00:14:03.982  }
00:14:03.982  ]'
00:14:03.982    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length
00:14:03.982   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]]
00:14:03.982    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]'
00:14:03.982   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:14:03.982    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:14:03.982   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:14:03.982   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 87681
00:14:03.982   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 87681 ']'
00:14:03.982   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 87681
00:14:03.982    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname
00:14:03.982   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:03.982    05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87681
00:14:04.241   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:04.241  killing process with pid 87681
00:14:04.241   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:04.241   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87681'
00:14:04.241   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 87681
00:14:04.241   05:59:24 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 87681
00:14:04.500  
00:14:04.500  real	0m1.610s
00:14:04.500  user	0m1.767s
00:14:04.500  sys	0m0.402s
00:14:04.500   05:59:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:04.500   05:59:25 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:04.500  ************************************
00:14:04.500  END TEST bdev_gpt_uuid
00:14:04.500  ************************************
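The gpt_uuid test starts spdk_tgt, loads bdev.json over RPC, waits for examine, and then, for each partition, fetches the bdev by its unique partition GUID and checks that both the alias and driver_specific.gpt.unique_partition_guid round-trip to the query key. The same lookup can be done by hand with the RPC client; a sketch using the first GUID from the dump above:

    # fetch one GPT partition bdev by GUID and read the GUID back out
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 \
        | jq -r '.[0].driver_specific.gpt.unique_partition_guid'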
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]]
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:14:04.500   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:04.759  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:04.759  Waiting for block devices as requested
00:14:04.759  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:14:05.018   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]]
00:14:05.018   05:59:25 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1
00:14:05.290  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:14:05.290  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:14:05.290  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:14:05.290  /dev/nvme0n1: calling ioctl to re-read partition table: Success
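The bytes wipefs erases are the on-disk signatures themselves: 45 46 49 20 50 41 52 54 is ASCII 'EFI PART', the GPT header signature found at both the primary (0x1000) and backup (0x13ffff000) headers, and 55 aa at offset 0x1fe is the protective-MBR boot signature. A quick decode:

    echo '45 46 49 20 50 41 52 54' | xxd -r -p    # prints: EFI PART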
00:14:05.290   05:59:26 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:14:05.290  
00:14:05.290  real	0m30.354s
00:14:05.290  user	0m45.395s
00:14:05.290  sys	0m5.424s
00:14:05.290   05:59:26 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:05.290   05:59:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:05.290  ************************************
00:14:05.290  END TEST blockdev_nvme_gpt
00:14:05.290  ************************************
00:14:05.290   05:59:26  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:14:05.290   05:59:26  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:05.290   05:59:26  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:05.290   05:59:26  -- common/autotest_common.sh@10 -- # set +x
00:14:05.290  ************************************
00:14:05.290  START TEST nvme
00:14:05.290  ************************************
00:14:05.290   05:59:26 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:14:05.290  * Looking for test storage...
00:14:05.290  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:14:05.290    05:59:26 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:05.290     05:59:26 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:05.290     05:59:26 nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:14:05.290    05:59:26 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:05.290    05:59:26 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:05.290    05:59:26 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:05.290    05:59:26 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:05.290    05:59:26 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:14:05.290    05:59:26 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:14:05.290    05:59:26 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:14:05.290    05:59:26 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:14:05.290    05:59:26 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:14:05.290    05:59:26 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:14:05.290    05:59:26 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:14:05.290    05:59:26 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:05.290    05:59:26 nvme -- scripts/common.sh@344 -- # case "$op" in
00:14:05.290    05:59:26 nvme -- scripts/common.sh@345 -- # : 1
00:14:05.290    05:59:26 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:05.290    05:59:26 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:05.290     05:59:26 nvme -- scripts/common.sh@365 -- # decimal 1
00:14:05.290     05:59:26 nvme -- scripts/common.sh@353 -- # local d=1
00:14:05.290     05:59:26 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:05.290     05:59:26 nvme -- scripts/common.sh@355 -- # echo 1
00:14:05.290    05:59:26 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:14:05.290     05:59:26 nvme -- scripts/common.sh@366 -- # decimal 2
00:14:05.290     05:59:26 nvme -- scripts/common.sh@353 -- # local d=2
00:14:05.290     05:59:26 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:05.290     05:59:26 nvme -- scripts/common.sh@355 -- # echo 2
00:14:05.290    05:59:26 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:14:05.290    05:59:26 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:05.290    05:59:26 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:05.290    05:59:26 nvme -- scripts/common.sh@368 -- # return 0
00:14:05.290    05:59:26 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:05.290    05:59:26 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:05.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.290  		--rc genhtml_branch_coverage=1
00:14:05.290  		--rc genhtml_function_coverage=1
00:14:05.290  		--rc genhtml_legend=1
00:14:05.290  		--rc geninfo_all_blocks=1
00:14:05.290  		--rc geninfo_unexecuted_blocks=1
00:14:05.290  		
00:14:05.290  		'
00:14:05.290    05:59:26 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:05.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.290  		--rc genhtml_branch_coverage=1
00:14:05.290  		--rc genhtml_function_coverage=1
00:14:05.290  		--rc genhtml_legend=1
00:14:05.290  		--rc geninfo_all_blocks=1
00:14:05.290  		--rc geninfo_unexecuted_blocks=1
00:14:05.290  		
00:14:05.290  		'
00:14:05.290    05:59:26 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:05.290  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.290  		--rc genhtml_branch_coverage=1
00:14:05.290  		--rc genhtml_function_coverage=1
00:14:05.290  		--rc genhtml_legend=1
00:14:05.290  		--rc geninfo_all_blocks=1
00:14:05.290  		--rc geninfo_unexecuted_blocks=1
00:14:05.290  		
00:14:05.291  		'
00:14:05.291    05:59:26 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:05.291  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:05.291  		--rc genhtml_branch_coverage=1
00:14:05.291  		--rc genhtml_function_coverage=1
00:14:05.291  		--rc genhtml_legend=1
00:14:05.291  		--rc geninfo_all_blocks=1
00:14:05.291  		--rc geninfo_unexecuted_blocks=1
00:14:05.291  		
00:14:05.291  		'
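(The block above is autotest_common.sh probing the installed lcov: cmp_versions compares 1.15 '<' 2 against the output of 'lcov --version', then LCOV_OPTS and LCOV are exported with the branch- and function-coverage flags later used for coverage reporting.)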
00:14:05.291   05:59:26 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:05.857  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:05.857  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:14:06.425    05:59:27 nvme -- nvme/nvme.sh@79 -- # uname
00:14:06.425   05:59:27 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:14:06.425   05:59:27 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:14:06.425   05:59:27 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1073 -- # echo 0
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1075 -- # stubpid=88045
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:14:06.425  Waiting for stub to ready for secondary processes...
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes...
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/88045 ]]
00:14:06.425   05:59:27 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:14:06.425  [2024-11-18 05:59:27.367188] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:06.425  [2024-11-18 05:59:27.367389] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:14:07.364  [2024-11-18 05:59:28.160909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:07.364  [2024-11-18 05:59:28.180948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:07.364  [2024-11-18 05:59:28.180954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:07.364  [2024-11-18 05:59:28.181033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:07.364  [2024-11-18 05:59:28.189193] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:14:07.364  [2024-11-18 05:59:28.189271] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:14:07.364  [2024-11-18 05:59:28.199315] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:14:07.364  [2024-11-18 05:59:28.199695] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:14:07.364   05:59:28 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:14:07.364  done.
00:14:07.364   05:59:28 nvme -- common/autotest_common.sh@1082 -- # echo done.
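The stub started above is SPDK's test/app/stub: a bare primary process that initializes DPDK once (-s 4096 becomes the EAL's '-m 4096', i.e. 4096 MB of memory; -i 0 becomes --file-prefix=spdk0; -m 0xE becomes the EAL coremask -c 0xE, i.e. cores 1-3, matching the three reactor lines) and creates /var/run/spdk_stub0 when ready, so that the NVMe test binaries that follow can come up quickly as secondary processes against the already-claimed hugepages. The polling loop traced above amounts to roughly:

    # wait until the stub signals readiness
    while [ ! -e /var/run/spdk_stub0 ]; do sleep 1s; done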
00:14:07.364   05:59:28 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:14:07.364   05:59:28 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:14:07.364   05:59:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:07.364   05:59:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:07.623  ************************************
00:14:07.623  START TEST nvme_reset
00:14:07.623  ************************************
00:14:07.623   05:59:28 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:14:07.623  Initializing NVMe Controllers
00:14:07.623  Skipping QEMU NVMe SSD at 0000:00:10.0
00:14:07.623  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:14:07.623  
00:14:07.623  real	0m0.249s
00:14:07.623  user	0m0.106s
00:14:07.623  sys	0m0.102s
00:14:07.623   05:59:28 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:07.623   05:59:28 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:14:07.623  ************************************
00:14:07.623  END TEST nvme_reset
00:14:07.623  ************************************
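nvme_reset came up empty by design here: the reset example skips QEMU-emulated controllers ("Skipping QEMU NVMe SSD at 0000:00:10.0"), and with that being the only attached device it exits with no controller to reset; the harness still counts the test as passed.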
00:14:07.882   05:59:28 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:14:07.882   05:59:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:07.882   05:59:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:07.882   05:59:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:07.882  ************************************
00:14:07.882  START TEST nvme_identify
00:14:07.882  ************************************
00:14:07.882   05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:14:07.882   05:59:28 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:14:07.882   05:59:28 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:14:07.882   05:59:28 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:14:07.882    05:59:28 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:14:07.882    05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:14:07.882    05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:14:07.882    05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:07.882     05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:14:07.882     05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:07.882    05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:14:07.882    05:59:28 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
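get_nvme_bdfs, traced above, resolves the host's NVMe addresses by generating a config with gen_nvme.sh and pulling each traddr out with jq, which on this VM yields just 0000:00:10.0; standalone:

    # list NVMe PCI addresses the way the harness does
    /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'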
00:14:07.882   05:59:28 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:14:08.142  =====================================================
00:14:08.142  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:08.142  =====================================================
00:14:08.142  Controller Capabilities/Features
00:14:08.142  ================================
00:14:08.142  Vendor ID:                             1b36
00:14:08.142  Subsystem Vendor ID:                   1af4
00:14:08.142  Serial Number:                         12340
00:14:08.142  Model Number:                          QEMU NVMe Ctrl
00:14:08.142  Firmware Version:                      8.0.0
00:14:08.142  Recommended Arb Burst:                 6
00:14:08.142  IEEE OUI Identifier:                   00 54 52
00:14:08.142  Multi-path I/O
00:14:08.142    May have multiple subsystem ports:   No
00:14:08.142    May have multiple controllers:       No
00:14:08.142    Associated with SR-IOV VF:           No
00:14:08.142  Max Data Transfer Size:                524288
00:14:08.142  Max Number of Namespaces:              256
00:14:08.142  Max Number of I/O Queues:              64
00:14:08.142  NVMe Specification Version (VS):       1.4
00:14:08.142  NVMe Specification Version (Identify): 1.4
00:14:08.142  Maximum Queue Entries:                 2048
00:14:08.142  Contiguous Queues Required:            Yes
00:14:08.142  Arbitration Mechanisms Supported
00:14:08.142    Weighted Round Robin:                Not Supported
00:14:08.142    Vendor Specific:                     Not Supported
00:14:08.142  Reset Timeout:                         7500 ms
00:14:08.142  Doorbell Stride:                       4 bytes
00:14:08.142  NVM Subsystem Reset:                   Not Supported
00:14:08.142  Command Sets Supported
00:14:08.142    NVM Command Set:                     Supported
00:14:08.142  Boot Partition:                        Not Supported
00:14:08.142  Memory Page Size Minimum:              4096 bytes
00:14:08.142  Memory Page Size Maximum:              65536 bytes
00:14:08.142  Persistent Memory Region:              Not Supported
00:14:08.142  Optional Asynchronous Events Supported
00:14:08.142    Namespace Attribute Notices:         Supported
00:14:08.142    Firmware Activation Notices:         Not Supported
00:14:08.142    ANA Change Notices:                  Not Supported
00:14:08.142    PLE Aggregate Log Change Notices:    Not Supported
00:14:08.142    LBA Status Info Alert Notices:       Not Supported
00:14:08.142    EGE Aggregate Log Change Notices:    Not Supported
00:14:08.142    Normal NVM Subsystem Shutdown event: Not Supported
00:14:08.142    Zone Descriptor Change Notices:      Not Supported
00:14:08.142    Discovery Log Change Notices:        Not Supported
00:14:08.142  Controller Attributes
00:14:08.142    128-bit Host Identifier:             Not Supported
00:14:08.142    Non-Operational Permissive Mode:     Not Supported
00:14:08.142    NVM Sets:                            Not Supported
00:14:08.142    Read Recovery Levels:                Not Supported
00:14:08.142    Endurance Groups:                    Not Supported
00:14:08.142    Predictable Latency Mode:            Not Supported
00:14:08.142    Traffic Based Keep Alive:            Not Supported
00:14:08.142    Namespace Granularity:               Not Supported
00:14:08.142    SQ Associations:                     Not Supported
00:14:08.142    UUID List:                           Not Supported
00:14:08.142    Multi-Domain Subsystem:              Not Supported
00:14:08.142    Fixed Capacity Management:           Not Supported
00:14:08.142    Variable Capacity Management:        Not Supported
00:14:08.142    Delete Endurance Group:              Not Supported
00:14:08.142    Delete NVM Set:                      Not Supported
00:14:08.142    Extended LBA Formats Supported:      Supported
00:14:08.142    Flexible Data Placement Supported:   Not Supported
00:14:08.142  
00:14:08.142  Controller Memory Buffer Support
00:14:08.142  ================================
00:14:08.142  Supported:                             No
00:14:08.142  
00:14:08.142  Persistent Memory Region Support
00:14:08.142  ================================
00:14:08.142  Supported:                             No
00:14:08.142  
00:14:08.142  Admin Command Set Attributes
00:14:08.142  ============================
00:14:08.142  Security Send/Receive:                 Not Supported
00:14:08.142  Format NVM:                            Supported
00:14:08.142  Firmware Activate/Download:            Not Supported
00:14:08.142  Namespace Management:                  Supported
00:14:08.142  Device Self-Test:                      Not Supported
00:14:08.142  Directives:                            Supported
00:14:08.142  NVMe-MI:                               Not Supported
00:14:08.142  Virtualization Management:             Not Supported
00:14:08.142  Doorbell Buffer Config:                Supported
00:14:08.142  Get LBA Status Capability:             Not Supported
00:14:08.142  Command & Feature Lockdown Capability: Not Supported
00:14:08.142  Abort Command Limit:                   4
00:14:08.142  Async Event Request Limit:             4
00:14:08.142  Number of Firmware Slots:              N/A
00:14:08.142  Firmware Slot 1 Read-Only:             N/A
00:14:08.142  Firmware Activation Without Reset:     N/A
00:14:08.142  Multiple Update Detection Support:     N/A
00:14:08.142  Firmware Update Granularity:           No Information Provided
00:14:08.142  Per-Namespace SMART Log:               Yes
00:14:08.142  Asymmetric Namespace Access Log Page:  Not Supported
00:14:08.142  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:14:08.142  Command Effects Log Page:              Supported
00:14:08.142  Get Log Page Extended Data:            Supported
00:14:08.142  Telemetry Log Pages:                   Not Supported
00:14:08.142  Persistent Event Log Pages:            Not Supported
00:14:08.142  Supported Log Pages Log Page:          May Support
00:14:08.142  Commands Supported & Effects Log Page: Not Supported
00:14:08.142  Feature Identifiers & Effects Log Page:May Support
00:14:08.142  NVMe-MI Commands & Effects Log Page:   May Support
00:14:08.142  Data Area 4 for Telemetry Log:         Not Supported
00:14:08.142  Error Log Page Entries Supported:      1
00:14:08.142  Keep Alive:                            Not Supported
00:14:08.142  
00:14:08.142  NVM Command Set Attributes
00:14:08.142  ==========================
00:14:08.142  Submission Queue Entry Size
00:14:08.142    Max:                       64
00:14:08.142    Min:                       64
00:14:08.142  Completion Queue Entry Size
00:14:08.142    Max:                       16
00:14:08.142    Min:                       16
00:14:08.142  Number of Namespaces:        256
00:14:08.142  Compare Command:             Supported
00:14:08.142  Write Uncorrectable Command: Not Supported
00:14:08.142  Dataset Management Command:  Supported
00:14:08.142  Write Zeroes Command:        Supported
00:14:08.142  Set Features Save Field:     Supported
00:14:08.142  Reservations:                Not Supported
00:14:08.142  Timestamp:                   Supported
00:14:08.143  Copy:                        Supported
00:14:08.143  Volatile Write Cache:        Present
00:14:08.143  Atomic Write Unit (Normal):  1
00:14:08.143  Atomic Write Unit (PFail):   1
00:14:08.143  Atomic Compare & Write Unit: 1
00:14:08.143  Fused Compare & Write:       Not Supported
00:14:08.143  Scatter-Gather List
00:14:08.143    SGL Command Set:           Supported
00:14:08.143    SGL Keyed:                 Not Supported
00:14:08.143    SGL Bit Bucket Descriptor: Not Supported
00:14:08.143    SGL Metadata Pointer:      Not Supported
00:14:08.143    Oversized SGL:             Not Supported
00:14:08.143    SGL Metadata Address:      Not Supported
00:14:08.143    SGL Offset:                Not Supported
00:14:08.143    Transport SGL Data Block:  Not Supported
00:14:08.143  Replay Protected Memory Block:  Not Supported
00:14:08.143  
00:14:08.143  Firmware Slot Information
00:14:08.143  =========================
00:14:08.143  Active slot:                 1
00:14:08.143  Slot 1 Firmware Revision:    1.0
00:14:08.143  
00:14:08.143  
00:14:08.143  Commands Supported and Effects
00:14:08.143  ==============================
00:14:08.143  Admin Commands
00:14:08.143  --------------
00:14:08.143     Delete I/O Submission Queue (00h): Supported 
00:14:08.143     Create I/O Submission Queue (01h): Supported 
00:14:08.143                    Get Log Page (02h): Supported 
00:14:08.143     Delete I/O Completion Queue (04h): Supported 
00:14:08.143     Create I/O Completion Queue (05h): Supported 
00:14:08.143                        Identify (06h): Supported 
00:14:08.143                           Abort (08h): Supported 
00:14:08.143                    Set Features (09h): Supported 
00:14:08.143                    Get Features (0Ah): Supported 
00:14:08.143      Asynchronous Event Request (0Ch): Supported 
00:14:08.143            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:14:08.143                  Directive Send (19h): Supported 
00:14:08.143               Directive Receive (1Ah): Supported 
00:14:08.143       Virtualization Management (1Ch): Supported 
00:14:08.143          Doorbell Buffer Config (7Ch): Supported 
00:14:08.143                      Format NVM (80h): Supported LBA-Change 
00:14:08.143  I/O Commands
00:14:08.143  ------------
00:14:08.143                           Flush (00h): Supported LBA-Change 
00:14:08.143                           Write (01h): Supported LBA-Change 
00:14:08.143                            Read (02h): Supported 
00:14:08.143                         Compare (05h): Supported 
00:14:08.143                    Write Zeroes (08h): Supported LBA-Change 
00:14:08.143              Dataset Management (09h): Supported LBA-Change 
00:14:08.143                         Unknown (0Ch): Supported 
00:14:08.143                         Unknown (12h): Supported 
00:14:08.143                            Copy (19h): Supported LBA-Change 
00:14:08.143                         Unknown (1Dh): Supported LBA-Change 
00:14:08.143  
00:14:08.143  Error Log
00:14:08.143  =========
00:14:08.143  
00:14:08.143  Arbitration
00:14:08.143  ===========
00:14:08.143  Arbitration Burst:           no limit
00:14:08.143  
00:14:08.143  Power Management
00:14:08.143  ================
00:14:08.143  Number of Power States:          1
00:14:08.143  Current Power State:             Power State #0
00:14:08.143  Power State #0:
00:14:08.143    Max Power:                     25.00 W
00:14:08.143    Non-Operational State:         Operational
00:14:08.143    Entry Latency:                 16 microseconds
00:14:08.143    Exit Latency:                  4 microseconds
00:14:08.143    Relative Read Throughput:      0
00:14:08.143    Relative Read Latency:         0
00:14:08.143    Relative Write Throughput:     0
00:14:08.143    Relative Write Latency:        0
00:14:08.143  [2024-11-18 05:59:28.925075] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 88066 terminated unexpected
00:14:08.143    Idle Power:                     Not Reported
00:14:08.143    Active Power:                   Not Reported
00:14:08.143  Non-Operational Permissive Mode: Not Supported
00:14:08.143  
00:14:08.143  Health Information
00:14:08.143  ==================
00:14:08.143  Critical Warnings:
00:14:08.143    Available Spare Space:     OK
00:14:08.143    Temperature:               OK
00:14:08.143    Device Reliability:        OK
00:14:08.143    Read Only:                 No
00:14:08.143    Volatile Memory Backup:    OK
00:14:08.143  Current Temperature:         323 Kelvin (50 Celsius)
00:14:08.143  Temperature Threshold:       343 Kelvin (70 Celsius)
00:14:08.143  Available Spare:             0%
00:14:08.143  Available Spare Threshold:   0%
00:14:08.143  Life Percentage Used:        0%
00:14:08.143  Data Units Read:             4215
00:14:08.143  Data Units Written:          3946
00:14:08.143  Host Read Commands:          227968
00:14:08.143  Host Write Commands:         242475
00:14:08.143  Controller Busy Time:        0 minutes
00:14:08.143  Power Cycles:                0
00:14:08.143  Power On Hours:              0 hours
00:14:08.143  Unsafe Shutdowns:            0
00:14:08.143  Unrecoverable Media Errors:  0
00:14:08.143  Lifetime Error Log Entries:  0
00:14:08.143  Warning Temperature Time:    0 minutes
00:14:08.143  Critical Temperature Time:   0 minutes
00:14:08.143  
00:14:08.143  Number of Queues
00:14:08.143  ================
00:14:08.143  Number of I/O Submission Queues:      64
00:14:08.143  Number of I/O Completion Queues:      64
00:14:08.143  
00:14:08.143  ZNS Specific Controller Data
00:14:08.143  ============================
00:14:08.143  Zone Append Size Limit:      0
00:14:08.143  
00:14:08.143  
00:14:08.143  Active Namespaces
00:14:08.143  =================
00:14:08.143  Namespace ID:1
00:14:08.143  Error Recovery Timeout:                Unlimited
00:14:08.143  Command Set Identifier:                NVM (00h)
00:14:08.143  Deallocate:                            Supported
00:14:08.143  Deallocated/Unwritten Error:           Supported
00:14:08.143  Deallocated Read Value:                All 0x00
00:14:08.143  Deallocate in Write Zeroes:            Not Supported
00:14:08.143  Deallocated Guard Field:               0xFFFF
00:14:08.143  Flush:                                 Supported
00:14:08.143  Reservation:                           Not Supported
00:14:08.143  Namespace Sharing Capabilities:        Private
00:14:08.143  Size (in LBAs):                        1310720 (5GiB)
00:14:08.143  Capacity (in LBAs):                    1310720 (5GiB)
00:14:08.143  Utilization (in LBAs):                 1310720 (5GiB)
00:14:08.143  Thin Provisioning:                     Not Supported
00:14:08.143  Per-NS Atomic Units:                   No
00:14:08.143  Maximum Single Source Range Length:    128
00:14:08.143  Maximum Copy Length:                   128
00:14:08.143  Maximum Source Range Count:            128
00:14:08.143  NGUID/EUI64 Never Reused:              No
00:14:08.143  Namespace Write Protected:             No
00:14:08.143  Number of LBA Formats:                 8
00:14:08.143  Current LBA Format:                    LBA Format #04
00:14:08.143  LBA Format #00: Data Size:   512  Metadata Size:     0
00:14:08.143  LBA Format #01: Data Size:   512  Metadata Size:     8
00:14:08.143  LBA Format #02: Data Size:   512  Metadata Size:    16
00:14:08.143  LBA Format #03: Data Size:   512  Metadata Size:    64
00:14:08.143  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:14:08.143  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:14:08.143  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:14:08.143  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:14:08.143  
00:14:08.143  NVM Specific Namespace Data
00:14:08.143  ===========================
00:14:08.143  Logical Block Storage Tag Mask:               0
00:14:08.143  Protection Information Capabilities:
00:14:08.143    16b Guard Protection Information Storage Tag Support:  No
00:14:08.143    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:14:08.143    Storage Tag Check Read Support:                        No
00:14:08.143  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:08.143   05:59:28 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:14:08.143   05:59:28 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:14:08.404  =====================================================
00:14:08.404  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:08.404  =====================================================
00:14:08.404  Controller Capabilities/Features
00:14:08.404  ================================
00:14:08.404  Vendor ID:                             1b36
00:14:08.404  Subsystem Vendor ID:                   1af4
00:14:08.404  Serial Number:                         12340
00:14:08.404  Model Number:                          QEMU NVMe Ctrl
00:14:08.404  Firmware Version:                      8.0.0
00:14:08.404  Recommended Arb Burst:                 6
00:14:08.404  IEEE OUI Identifier:                   00 54 52
00:14:08.404  Multi-path I/O
00:14:08.404    May have multiple subsystem ports:   No
00:14:08.404    May have multiple controllers:       No
00:14:08.404    Associated with SR-IOV VF:           No
00:14:08.404  Max Data Transfer Size:                524288
00:14:08.404  Max Number of Namespaces:              256
00:14:08.404  Max Number of I/O Queues:              64
00:14:08.404  NVMe Specification Version (VS):       1.4
00:14:08.404  NVMe Specification Version (Identify): 1.4
00:14:08.404  Maximum Queue Entries:                 2048
00:14:08.404  Contiguous Queues Required:            Yes
00:14:08.404  Arbitration Mechanisms Supported
00:14:08.404    Weighted Round Robin:                Not Supported
00:14:08.404    Vendor Specific:                     Not Supported
00:14:08.404  Reset Timeout:                         7500 ms
00:14:08.404  Doorbell Stride:                       4 bytes
00:14:08.405  NVM Subsystem Reset:                   Not Supported
00:14:08.405  Command Sets Supported
00:14:08.405    NVM Command Set:                     Supported
00:14:08.405  Boot Partition:                        Not Supported
00:14:08.405  Memory Page Size Minimum:              4096 bytes
00:14:08.405  Memory Page Size Maximum:              65536 bytes
00:14:08.405  Persistent Memory Region:              Not Supported
00:14:08.405  Optional Asynchronous Events Supported
00:14:08.405    Namespace Attribute Notices:         Supported
00:14:08.405    Firmware Activation Notices:         Not Supported
00:14:08.405    ANA Change Notices:                  Not Supported
00:14:08.405    PLE Aggregate Log Change Notices:    Not Supported
00:14:08.405    LBA Status Info Alert Notices:       Not Supported
00:14:08.405    EGE Aggregate Log Change Notices:    Not Supported
00:14:08.405    Normal NVM Subsystem Shutdown event: Not Supported
00:14:08.405    Zone Descriptor Change Notices:      Not Supported
00:14:08.405    Discovery Log Change Notices:        Not Supported
00:14:08.405  Controller Attributes
00:14:08.405    128-bit Host Identifier:             Not Supported
00:14:08.405    Non-Operational Permissive Mode:     Not Supported
00:14:08.405    NVM Sets:                            Not Supported
00:14:08.405    Read Recovery Levels:                Not Supported
00:14:08.405    Endurance Groups:                    Not Supported
00:14:08.405    Predictable Latency Mode:            Not Supported
00:14:08.405    Traffic Based Keep Alive:            Not Supported
00:14:08.405    Namespace Granularity:               Not Supported
00:14:08.405    SQ Associations:                     Not Supported
00:14:08.405    UUID List:                           Not Supported
00:14:08.405    Multi-Domain Subsystem:              Not Supported
00:14:08.405    Fixed Capacity Management:           Not Supported
00:14:08.405    Variable Capacity Management:        Not Supported
00:14:08.405    Delete Endurance Group:              Not Supported
00:14:08.405    Delete NVM Set:                      Not Supported
00:14:08.405    Extended LBA Formats Supported:      Supported
00:14:08.405    Flexible Data Placement Supported:   Not Supported
00:14:08.405  
00:14:08.405  Controller Memory Buffer Support
00:14:08.405  ================================
00:14:08.405  Supported:                             No
00:14:08.405  
00:14:08.405  Persistent Memory Region Support
00:14:08.405  ================================
00:14:08.405  Supported:                             No
00:14:08.405  
00:14:08.405  Admin Command Set Attributes
00:14:08.405  ============================
00:14:08.405  Security Send/Receive:                 Not Supported
00:14:08.405  Format NVM:                            Supported
00:14:08.405  Firmware Activate/Download:            Not Supported
00:14:08.405  Namespace Management:                  Supported
00:14:08.405  Device Self-Test:                      Not Supported
00:14:08.405  Directives:                            Supported
00:14:08.405  NVMe-MI:                               Not Supported
00:14:08.405  Virtualization Management:             Not Supported
00:14:08.405  Doorbell Buffer Config:                Supported
00:14:08.405  Get LBA Status Capability:             Not Supported
00:14:08.405  Command & Feature Lockdown Capability: Not Supported
00:14:08.405  Abort Command Limit:                   4
00:14:08.405  Async Event Request Limit:             4
00:14:08.405  Number of Firmware Slots:              N/A
00:14:08.405  Firmware Slot 1 Read-Only:             N/A
00:14:08.405  Firmware Activation Without Reset:     N/A
00:14:08.405  Multiple Update Detection Support:     N/A
00:14:08.405  Firmware Update Granularity:           No Information Provided
00:14:08.405  Per-Namespace SMART Log:               Yes
00:14:08.405  Asymmetric Namespace Access Log Page:  Not Supported
00:14:08.405  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:14:08.405  Command Effects Log Page:              Supported
00:14:08.405  Get Log Page Extended Data:            Supported
00:14:08.405  Telemetry Log Pages:                   Not Supported
00:14:08.405  Persistent Event Log Pages:            Not Supported
00:14:08.405  Supported Log Pages Log Page:          May Support
00:14:08.405  Commands Supported & Effects Log Page: Not Supported
00:14:08.405  Feature Identifiers & Effects Log Page: May Support
00:14:08.405  NVMe-MI Commands & Effects Log Page:   May Support
00:14:08.405  Data Area 4 for Telemetry Log:         Not Supported
00:14:08.405  Error Log Page Entries Supported:      1
00:14:08.405  Keep Alive:                            Not Supported
00:14:08.405  
00:14:08.405  NVM Command Set Attributes
00:14:08.405  ==========================
00:14:08.405  Submission Queue Entry Size
00:14:08.405    Max:                       64
00:14:08.405    Min:                       64
00:14:08.405  Completion Queue Entry Size
00:14:08.405    Max:                       16
00:14:08.405    Min:                       16
00:14:08.405  Number of Namespaces:        256
00:14:08.405  Compare Command:             Supported
00:14:08.405  Write Uncorrectable Command: Not Supported
00:14:08.405  Dataset Management Command:  Supported
00:14:08.405  Write Zeroes Command:        Supported
00:14:08.405  Set Features Save Field:     Supported
00:14:08.405  Reservations:                Not Supported
00:14:08.405  Timestamp:                   Supported
00:14:08.405  Copy:                        Supported
00:14:08.405  Volatile Write Cache:        Present
00:14:08.405  Atomic Write Unit (Normal):  1
00:14:08.405  Atomic Write Unit (PFail):   1
00:14:08.405  Atomic Compare & Write Unit: 1
00:14:08.405  Fused Compare & Write:       Not Supported
00:14:08.405  Scatter-Gather List
00:14:08.405    SGL Command Set:           Supported
00:14:08.405    SGL Keyed:                 Not Supported
00:14:08.405    SGL Bit Bucket Descriptor: Not Supported
00:14:08.405    SGL Metadata Pointer:      Not Supported
00:14:08.405    Oversized SGL:             Not Supported
00:14:08.405    SGL Metadata Address:      Not Supported
00:14:08.405    SGL Offset:                Not Supported
00:14:08.405    Transport SGL Data Block:  Not Supported
00:14:08.405  Replay Protected Memory Block:  Not Supported
00:14:08.405  
00:14:08.405  Firmware Slot Information
00:14:08.405  =========================
00:14:08.405  Active slot:                 1
00:14:08.405  Slot 1 Firmware Revision:    1.0
00:14:08.405  
00:14:08.405  
00:14:08.405  Commands Supported and Effects
00:14:08.405  ==============================
00:14:08.405  Admin Commands
00:14:08.405  --------------
00:14:08.405     Delete I/O Submission Queue (00h): Supported 
00:14:08.405     Create I/O Submission Queue (01h): Supported 
00:14:08.405                    Get Log Page (02h): Supported 
00:14:08.405     Delete I/O Completion Queue (04h): Supported 
00:14:08.405     Create I/O Completion Queue (05h): Supported 
00:14:08.405                        Identify (06h): Supported 
00:14:08.405                           Abort (08h): Supported 
00:14:08.405                    Set Features (09h): Supported 
00:14:08.405                    Get Features (0Ah): Supported 
00:14:08.405      Asynchronous Event Request (0Ch): Supported 
00:14:08.405            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:14:08.405                  Directive Send (19h): Supported 
00:14:08.405               Directive Receive (1Ah): Supported 
00:14:08.405       Virtualization Management (1Ch): Supported 
00:14:08.405          Doorbell Buffer Config (7Ch): Supported 
00:14:08.405                      Format NVM (80h): Supported LBA-Change 
00:14:08.405  I/O Commands
00:14:08.405  ------------
00:14:08.405                           Flush (00h): Supported LBA-Change 
00:14:08.405                           Write (01h): Supported LBA-Change 
00:14:08.405                            Read (02h): Supported 
00:14:08.405                         Compare (05h): Supported 
00:14:08.405                    Write Zeroes (08h): Supported LBA-Change 
00:14:08.405              Dataset Management (09h): Supported LBA-Change 
00:14:08.405                         Unknown (0Ch): Supported 
00:14:08.405                         Unknown (12h): Supported 
00:14:08.405                            Copy (19h): Supported LBA-Change 
00:14:08.405                         Unknown (1Dh): Supported LBA-Change 
00:14:08.405  
00:14:08.405  Error Log
00:14:08.405  =========
00:14:08.405  
00:14:08.405  Arbitration
00:14:08.405  ===========
00:14:08.405  Arbitration Burst:           no limit
00:14:08.405  
00:14:08.405  Power Management
00:14:08.405  ================
00:14:08.405  Number of Power States:          1
00:14:08.405  Current Power State:             Power State #0
00:14:08.405  Power State #0:
00:14:08.405    Max Power:                     25.00 W
00:14:08.405    Non-Operational State:         Operational
00:14:08.405    Entry Latency:                 16 microseconds
00:14:08.405    Exit Latency:                  4 microseconds
00:14:08.405    Relative Read Throughput:      0
00:14:08.405    Relative Read Latency:         0
00:14:08.405    Relative Write Throughput:     0
00:14:08.405    Relative Write Latency:        0
00:14:08.405    Idle Power:                    Not Reported
00:14:08.405    Active Power:                  Not Reported
00:14:08.405  Non-Operational Permissive Mode: Not Supported
00:14:08.405  
00:14:08.405  Health Information
00:14:08.405  ==================
00:14:08.405  Critical Warnings:
00:14:08.405    Available Spare Space:     OK
00:14:08.405    Temperature:               OK
00:14:08.405    Device Reliability:        OK
00:14:08.405    Read Only:                 No
00:14:08.405    Volatile Memory Backup:    OK
00:14:08.405  Current Temperature:         323 Kelvin (50 Celsius)
00:14:08.405  Temperature Threshold:       343 Kelvin (70 Celsius)
00:14:08.405  Available Spare:             0%
00:14:08.405  Available Spare Threshold:   0%
00:14:08.405  Life Percentage Used:        0%
00:14:08.405  Data Units Read:             4215
00:14:08.405  Data Units Written:          3946
00:14:08.405  Host Read Commands:          227968
00:14:08.405  Host Write Commands:         242475
00:14:08.405  Controller Busy Time:        0 minutes
00:14:08.405  Power Cycles:                0
00:14:08.405  Power On Hours:              0 hours
00:14:08.405  Unsafe Shutdowns:            0
00:14:08.405  Unrecoverable Media Errors:  0
00:14:08.405  Lifetime Error Log Entries:  0
00:14:08.405  Warning Temperature Time:    0 minutes
00:14:08.405  Critical Temperature Time:   0 minutes
00:14:08.405  
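Note: the "Data Units Read/Written" counters above are defined by the NVMe specification in thousands of 512-byte units, so they convert to bytes directly. A minimal shell sketch using the values from this run (variable names are illustrative only, not part of the test):

  # Convert the SMART data-unit counters above to bytes (1 unit = 1000 * 512 B).
  data_units_read=4215
  data_units_written=3946
  echo "read:  $((data_units_read * 1000 * 512)) bytes"    # 2158080000 (~2.16 GB)
  echo "write: $((data_units_written * 1000 * 512)) bytes" # 2020352000 (~2.02 GB)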
00:14:08.405  Number of Queues
00:14:08.405  ================
00:14:08.405  Number of I/O Submission Queues:      64
00:14:08.405  Number of I/O Completion Queues:      64
00:14:08.405  
00:14:08.405  ZNS Specific Controller Data
00:14:08.405  ============================
00:14:08.405  Zone Append Size Limit:      0
00:14:08.405  
00:14:08.405  
00:14:08.405  Active Namespaces
00:14:08.405  =================
00:14:08.405  Namespace ID:1
00:14:08.405  Error Recovery Timeout:                Unlimited
00:14:08.405  Command Set Identifier:                NVM (00h)
00:14:08.405  Deallocate:                            Supported
00:14:08.405  Deallocated/Unwritten Error:           Supported
00:14:08.405  Deallocated Read Value:                All 0x00
00:14:08.405  Deallocate in Write Zeroes:            Not Supported
00:14:08.405  Deallocated Guard Field:               0xFFFF
00:14:08.405  Flush:                                 Supported
00:14:08.405  Reservation:                           Not Supported
00:14:08.405  Namespace Sharing Capabilities:        Private
00:14:08.405  Size (in LBAs):                        1310720 (5GiB)
00:14:08.405  Capacity (in LBAs):                    1310720 (5GiB)
00:14:08.405  Utilization (in LBAs):                 1310720 (5GiB)
00:14:08.405  Thin Provisioning:                     Not Supported
00:14:08.405  Per-NS Atomic Units:                   No
00:14:08.405  Maximum Single Source Range Length:    128
00:14:08.405  Maximum Copy Length:                   128
00:14:08.405  Maximum Source Range Count:            128
00:14:08.405  NGUID/EUI64 Never Reused:              No
00:14:08.405  Namespace Write Protected:             No
00:14:08.405  Number of LBA Formats:                 8
00:14:08.406  Current LBA Format:                    LBA Format #04
00:14:08.406  LBA Format #00: Data Size:   512  Metadata Size:     0
00:14:08.406  LBA Format #01: Data Size:   512  Metadata Size:     8
00:14:08.406  LBA Format #02: Data Size:   512  Metadata Size:    16
00:14:08.406  LBA Format #03: Data Size:   512  Metadata Size:    64
00:14:08.406  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:14:08.406  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:14:08.406  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:14:08.406  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:14:08.406  
00:14:08.406  NVM Specific Namespace Data
00:14:08.406  ===========================
00:14:08.406  Logical Block Storage Tag Mask:               0
00:14:08.406  Protection Information Capabilities:
00:14:08.406    16b Guard Protection Information Storage Tag Support:  No
00:14:08.406    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:14:08.406    Storage Tag Check Read Support:                        No
00:14:08.406  Extended LBA Format #00: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  Extended LBA Format #01: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  Extended LBA Format #02: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  Extended LBA Format #03: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  Extended LBA Format #04: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  Extended LBA Format #05: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  Extended LBA Format #06: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  Extended LBA Format #07: Storage Tag Size: 0, Protection Information Format: 16b Guard PI
00:14:08.406  
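Note: the size reported above follows directly from the current LBA format. With LBA Format #04 (4096-byte data blocks, no metadata), 1310720 LBAs is exactly 5 GiB; a quick shell check using the identify output above:

  # Verify Size/Capacity/Utilization: LBA count * LBA data size.
  lbas=1310720
  lba_size=4096                     # LBA Format #04: Data Size 4096
  bytes=$((lbas * lba_size))
  echo "$bytes bytes = $((bytes / 1024 / 1024 / 1024)) GiB"  # 5368709120 bytes = 5 GiB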
00:14:08.406  real	0m0.597s
00:14:08.406  user	0m0.231s
00:14:08.406  sys	0m0.286s
00:14:08.406   05:59:29 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:08.406   05:59:29 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:14:08.406  ************************************
00:14:08.406  END TEST nvme_identify
00:14:08.406  ************************************
00:14:08.406   05:59:29 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:14:08.406   05:59:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:08.406   05:59:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:08.406   05:59:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:08.406  ************************************
00:14:08.406  START TEST nvme_perf
00:14:08.406  ************************************
00:14:08.406   05:59:29 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:14:08.406   05:59:29 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:14:09.786  Initializing NVMe Controllers
00:14:09.786  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:09.786  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:09.786  Initialization complete. Launching workers.
00:14:09.786  ========================================================
00:14:09.786                                                                             Latency(us)
00:14:09.786  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:09.786  PCIE (0000:00:10.0) NSID 1 from core  0:   87099.47    1020.70    1468.57     654.05    5088.82
00:14:09.786  ========================================================
00:14:09.786  Total                                  :   87099.47    1020.70    1468.57     654.05    5088.82
00:14:09.786  
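Note: the summary row is internally consistent. MiB/s is IOPS times the 12288-byte I/O size (-o 12288), and with queue depth 128 (-q 128) the average latency follows from Little's law (latency ≈ qd / IOPS) to within about 0.1%. A minimal awk sketch using the figures above:

  # Sanity-check the perf summary line (read run, values from the table above).
  awk 'BEGIN {
    iops = 87099.47; io_size = 12288; qd = 128
    printf "MiB/s:  %.2f\n", iops * io_size / (1024 * 1024)  # ~1020.70
    printf "avg us: %.1f\n", qd / iops * 1e6                 # ~1469.6 vs reported 1468.57
  }'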
00:14:09.786  Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:09.786  =================================================================================
00:14:09.786    1.00000% :   789.411us
00:14:09.786   10.00000% :   968.145us
00:14:09.786   25.00000% :  1146.880us
00:14:09.786   50.00000% :  1429.876us
00:14:09.786   75.00000% :  1697.978us
00:14:09.786   90.00000% :  1995.869us
00:14:09.786   95.00000% :  2308.655us
00:14:09.786   98.00000% :  2636.335us
00:14:09.786   99.00000% :  2800.175us
00:14:09.786   99.50000% :  3053.382us
00:14:09.786   99.90000% :  3961.949us
00:14:09.786   99.99000% :  4915.200us
00:14:09.786   99.99900% :  5093.935us
00:14:09.786   99.99990% :  5093.935us
00:14:09.786   99.99999% :  5093.935us
00:14:09.786  
00:14:09.786  Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:09.786  ==============================================================================
00:14:09.786         Range in us     Cumulative    IO count
00:14:09.786    651.636 -   655.360:    0.0023%  (        2)
00:14:09.786    655.360 -   659.084:    0.0046%  (        2)
00:14:09.786    659.084 -   662.807:    0.0057%  (        1)
00:14:09.786    666.531 -   670.255:    0.0092%  (        3)
00:14:09.786    670.255 -   673.978:    0.0126%  (        3)
00:14:09.786    677.702 -   681.425:    0.0195%  (        6)
00:14:09.786    681.425 -   685.149:    0.0253%  (        5)
00:14:09.786    685.149 -   688.873:    0.0298%  (        4)
00:14:09.786    688.873 -   692.596:    0.0367%  (        6)
00:14:09.786    692.596 -   696.320:    0.0471%  (        9)
00:14:09.786    696.320 -   700.044:    0.0654%  (       16)
00:14:09.786    700.044 -   703.767:    0.0781%  (       11)
00:14:09.786    703.767 -   707.491:    0.0930%  (       13)
00:14:09.786    707.491 -   711.215:    0.1045%  (       10)
00:14:09.786    711.215 -   714.938:    0.1160%  (       10)
00:14:09.786    714.938 -   718.662:    0.1378%  (       19)
00:14:09.786    718.662 -   722.385:    0.1676%  (       26)
00:14:09.786    722.385 -   726.109:    0.1883%  (       18)
00:14:09.786    726.109 -   729.833:    0.2239%  (       31)
00:14:09.786    729.833 -   733.556:    0.2549%  (       27)
00:14:09.786    733.556 -   737.280:    0.2847%  (       26)
00:14:09.786    737.280 -   741.004:    0.3134%  (       25)
00:14:09.786    741.004 -   744.727:    0.3536%  (       35)
00:14:09.786    744.727 -   748.451:    0.3938%  (       35)
00:14:09.786    748.451 -   752.175:    0.4340%  (       35)
00:14:09.786    752.175 -   755.898:    0.4764%  (       37)
00:14:09.786    755.898 -   759.622:    0.5155%  (       34)
00:14:09.786    759.622 -   763.345:    0.5683%  (       46)
00:14:09.786    763.345 -   767.069:    0.6268%  (       51)
00:14:09.786    767.069 -   770.793:    0.6808%  (       47)
00:14:09.786    770.793 -   774.516:    0.7290%  (       42)
00:14:09.786    774.516 -   778.240:    0.7876%  (       51)
00:14:09.786    778.240 -   781.964:    0.8622%  (       65)
00:14:09.786    781.964 -   785.687:    0.9334%  (       62)
00:14:09.786    785.687 -   789.411:    1.0057%  (       63)
00:14:09.786    789.411 -   793.135:    1.0964%  (       79)
00:14:09.786    793.135 -   796.858:    1.1917%  (       83)
00:14:09.786    796.858 -   800.582:    1.2801%  (       77)
00:14:09.786    800.582 -   804.305:    1.3673%  (       76)
00:14:09.786    804.305 -   808.029:    1.4752%  (       94)
00:14:09.786    808.029 -   811.753:    1.5832%  (       94)
00:14:09.786    811.753 -   815.476:    1.6945%  (       97)
00:14:09.786    815.476 -   819.200:    1.7967%  (       89)
00:14:09.786    819.200 -   822.924:    1.9230%  (      110)
00:14:09.786    822.924 -   826.647:    2.0539%  (      114)
00:14:09.786    826.647 -   830.371:    2.1767%  (      107)
00:14:09.786    830.371 -   834.095:    2.3191%  (      124)
00:14:09.786    834.095 -   837.818:    2.4626%  (      125)
00:14:09.786    837.818 -   841.542:    2.6153%  (      133)
00:14:09.786    841.542 -   845.265:    2.7760%  (      140)
00:14:09.786    845.265 -   848.989:    2.9608%  (      161)
00:14:09.786    848.989 -   852.713:    3.1158%  (      135)
00:14:09.786    852.713 -   856.436:    3.2651%  (      130)
00:14:09.786    856.436 -   860.160:    3.4648%  (      174)
00:14:09.786    860.160 -   863.884:    3.6646%  (      174)
00:14:09.786    863.884 -   867.607:    3.8471%  (      159)
00:14:09.786    867.607 -   871.331:    4.0423%  (      170)
00:14:09.786    871.331 -   875.055:    4.2409%  (      173)
00:14:09.786    875.055 -   878.778:    4.4625%  (      193)
00:14:09.786    878.778 -   882.502:    4.6542%  (      167)
00:14:09.786    882.502 -   886.225:    4.8643%  (      183)
00:14:09.786    886.225 -   889.949:    5.0847%  (      192)
00:14:09.786    889.949 -   893.673:    5.3086%  (      195)
00:14:09.786    893.673 -   897.396:    5.5302%  (      193)
00:14:09.786    897.396 -   901.120:    5.7609%  (      201)
00:14:09.786    901.120 -   904.844:    6.0101%  (      217)
00:14:09.786    904.844 -   908.567:    6.2385%  (      199)
00:14:09.786    908.567 -   912.291:    6.4980%  (      226)
00:14:09.786    912.291 -   916.015:    6.7391%  (      210)
00:14:09.786    916.015 -   919.738:    6.9939%  (      222)
00:14:09.786    919.738 -   923.462:    7.2155%  (      193)
00:14:09.786    923.462 -   927.185:    7.4887%  (      238)
00:14:09.786    927.185 -   930.909:    7.7218%  (      203)
00:14:09.786    930.909 -   934.633:    7.9744%  (      220)
00:14:09.786    934.633 -   938.356:    8.2511%  (      241)
00:14:09.787    938.356 -   942.080:    8.5117%  (      227)
00:14:09.787    942.080 -   945.804:    8.7849%  (      238)
00:14:09.787    945.804 -   949.527:    9.0478%  (      229)
00:14:09.787    949.527 -   953.251:    9.3176%  (      235)
00:14:09.787    953.251 -   960.698:    9.8928%  (      501)
00:14:09.787    960.698 -   968.145:   10.4484%  (      484)
00:14:09.787    968.145 -   975.593:   11.0270%  (      504)
00:14:09.787    975.593 -   983.040:   11.5896%  (      490)
00:14:09.787    983.040 -   990.487:   12.1877%  (      521)
00:14:09.787    990.487 -   997.935:   12.7744%  (      511)
00:14:09.787    997.935 -  1005.382:   13.3978%  (      543)
00:14:09.787   1005.382 -  1012.829:   13.9787%  (      506)
00:14:09.787   1012.829 -  1020.276:   14.6078%  (      548)
00:14:09.787   1020.276 -  1027.724:   15.2037%  (      519)
00:14:09.787   1027.724 -  1035.171:   15.8248%  (      541)
00:14:09.787   1035.171 -  1042.618:   16.4240%  (      522)
00:14:09.787   1042.618 -  1050.065:   17.0348%  (      532)
00:14:09.787   1050.065 -  1057.513:   17.6525%  (      538)
00:14:09.787   1057.513 -  1064.960:   18.2414%  (      513)
00:14:09.787   1064.960 -  1072.407:   18.8453%  (      526)
00:14:09.787   1072.407 -  1079.855:   19.4538%  (      530)
00:14:09.787   1079.855 -  1087.302:   20.0898%  (      554)
00:14:09.787   1087.302 -  1094.749:   20.6914%  (      524)
00:14:09.787   1094.749 -  1102.196:   21.3538%  (      577)
00:14:09.787   1102.196 -  1109.644:   21.9531%  (      522)
00:14:09.787   1109.644 -  1117.091:   22.5960%  (      560)
00:14:09.787   1117.091 -  1124.538:   23.2205%  (      544)
00:14:09.787   1124.538 -  1131.985:   23.8485%  (      547)
00:14:09.787   1131.985 -  1139.433:   24.4857%  (      555)
00:14:09.787   1139.433 -  1146.880:   25.1205%  (      553)
00:14:09.787   1146.880 -  1154.327:   25.7669%  (      563)
00:14:09.787   1154.327 -  1161.775:   26.3788%  (      533)
00:14:09.787   1161.775 -  1169.222:   27.0378%  (      574)
00:14:09.787   1169.222 -  1176.669:   27.6819%  (      561)
00:14:09.787   1176.669 -  1184.116:   28.3167%  (      553)
00:14:09.787   1184.116 -  1191.564:   28.9436%  (      546)
00:14:09.787   1191.564 -  1199.011:   29.6140%  (      584)
00:14:09.787   1199.011 -  1206.458:   30.2477%  (      552)
00:14:09.787   1206.458 -  1213.905:   30.8941%  (      563)
00:14:09.787   1213.905 -  1221.353:   31.5152%  (      541)
00:14:09.787   1221.353 -  1228.800:   32.1742%  (      574)
00:14:09.787   1228.800 -  1236.247:   32.8355%  (      576)
00:14:09.787   1236.247 -  1243.695:   33.4680%  (      551)
00:14:09.787   1243.695 -  1251.142:   34.1282%  (      575)
00:14:09.787   1251.142 -  1258.589:   34.7791%  (      567)
00:14:09.787   1258.589 -  1266.036:   35.4301%  (      567)
00:14:09.787   1266.036 -  1273.484:   36.0845%  (      570)
00:14:09.787   1273.484 -  1280.931:   36.7446%  (      575)
00:14:09.787   1280.931 -  1288.378:   37.4300%  (      597)
00:14:09.787   1288.378 -  1295.825:   38.1108%  (      593)
00:14:09.787   1295.825 -  1303.273:   38.7629%  (      568)
00:14:09.787   1303.273 -  1310.720:   39.4724%  (      618)
00:14:09.787   1310.720 -  1318.167:   40.1153%  (      560)
00:14:09.787   1318.167 -  1325.615:   40.8225%  (      616)
00:14:09.787   1325.615 -  1333.062:   41.5090%  (      598)
00:14:09.787   1333.062 -  1340.509:   42.1864%  (      590)
00:14:09.787   1340.509 -  1347.956:   42.8637%  (      590)
00:14:09.787   1347.956 -  1355.404:   43.5675%  (      613)
00:14:09.787   1355.404 -  1362.851:   44.2666%  (      609)
00:14:09.787   1362.851 -  1370.298:   44.9761%  (      618)
00:14:09.787   1370.298 -  1377.745:   45.6684%  (      603)
00:14:09.787   1377.745 -  1385.193:   46.3572%  (      600)
00:14:09.787   1385.193 -  1392.640:   47.0495%  (      603)
00:14:09.787   1392.640 -  1400.087:   47.7521%  (      612)
00:14:09.787   1400.087 -  1407.535:   48.4582%  (      615)
00:14:09.787   1407.535 -  1414.982:   49.1527%  (      605)
00:14:09.787   1414.982 -  1422.429:   49.8760%  (      630)
00:14:09.787   1422.429 -  1429.876:   50.5568%  (      593)
00:14:09.787   1429.876 -  1437.324:   51.2766%  (      627)
00:14:09.787   1437.324 -  1444.771:   51.9586%  (      594)
00:14:09.787   1444.771 -  1452.218:   52.6635%  (      614)
00:14:09.787   1452.218 -  1459.665:   53.3615%  (      608)
00:14:09.787   1459.665 -  1467.113:   54.0641%  (      612)
00:14:09.787   1467.113 -  1474.560:   54.7713%  (      616)
00:14:09.787   1474.560 -  1482.007:   55.4464%  (      588)
00:14:09.787   1482.007 -  1489.455:   56.1731%  (      633)
00:14:09.787   1489.455 -  1496.902:   56.8516%  (      591)
00:14:09.787   1496.902 -  1504.349:   57.5875%  (      641)
00:14:09.787   1504.349 -  1511.796:   58.2694%  (      594)
00:14:09.787   1511.796 -  1519.244:   58.9766%  (      616)
00:14:09.787   1519.244 -  1526.691:   59.6723%  (      606)
00:14:09.787   1526.691 -  1534.138:   60.3853%  (      621)
00:14:09.787   1534.138 -  1541.585:   61.0684%  (      595)
00:14:09.787   1541.585 -  1549.033:   61.7721%  (      613)
00:14:09.787   1549.033 -  1556.480:   62.4736%  (      611)
00:14:09.787   1556.480 -  1563.927:   63.1544%  (      593)
00:14:09.787   1563.927 -  1571.375:   63.8639%  (      618)
00:14:09.787   1571.375 -  1578.822:   64.5160%  (      568)
00:14:09.787   1578.822 -  1586.269:   65.2278%  (      620)
00:14:09.787   1586.269 -  1593.716:   65.8879%  (      575)
00:14:09.787   1593.716 -  1601.164:   66.5963%  (      617)
00:14:09.787   1601.164 -  1608.611:   67.2621%  (      580)
00:14:09.787   1608.611 -  1616.058:   67.9314%  (      583)
00:14:09.787   1616.058 -  1623.505:   68.6008%  (      583)
00:14:09.787   1623.505 -  1630.953:   69.2770%  (      589)
00:14:09.787   1630.953 -  1638.400:   69.9245%  (      564)
00:14:09.787   1638.400 -  1645.847:   70.6030%  (      591)
00:14:09.787   1645.847 -  1653.295:   71.2401%  (      555)
00:14:09.787   1653.295 -  1660.742:   71.8888%  (      565)
00:14:09.787   1660.742 -  1668.189:   72.5340%  (      562)
00:14:09.787   1668.189 -  1675.636:   73.1826%  (      565)
00:14:09.787   1675.636 -  1683.084:   73.8290%  (      563)
00:14:09.787   1683.084 -  1690.531:   74.4512%  (      542)
00:14:09.787   1690.531 -  1697.978:   75.1079%  (      572)
00:14:09.787   1697.978 -  1705.425:   75.6946%  (      511)
00:14:09.787   1705.425 -  1712.873:   76.2984%  (      526)
00:14:09.787   1712.873 -  1720.320:   76.8656%  (      494)
00:14:09.787   1720.320 -  1727.767:   77.4465%  (      506)
00:14:09.787   1727.767 -  1735.215:   78.0045%  (      486)
00:14:09.787   1735.215 -  1742.662:   78.5337%  (      461)
00:14:09.787   1742.662 -  1750.109:   79.0607%  (      459)
00:14:09.787   1750.109 -  1757.556:   79.5497%  (      426)
00:14:09.787   1757.556 -  1765.004:   80.0377%  (      425)
00:14:09.787   1765.004 -  1772.451:   80.5348%  (      433)
00:14:09.787   1772.451 -  1779.898:   80.9882%  (      395)
00:14:09.787   1779.898 -  1787.345:   81.4509%  (      403)
00:14:09.787   1787.345 -  1794.793:   81.8975%  (      389)
00:14:09.787   1794.793 -  1802.240:   82.3234%  (      371)
00:14:09.787   1802.240 -  1809.687:   82.7585%  (      379)
00:14:09.787   1809.687 -  1817.135:   83.1420%  (      334)
00:14:09.787   1817.135 -  1824.582:   83.5645%  (      368)
00:14:09.787   1824.582 -  1832.029:   83.9284%  (      317)
00:14:09.787   1832.029 -  1839.476:   84.3222%  (      343)
00:14:09.787   1839.476 -  1846.924:   84.6942%  (      324)
00:14:09.787   1846.924 -  1854.371:   85.0386%  (      300)
00:14:09.787   1854.371 -  1861.818:   85.4037%  (      318)
00:14:09.787   1861.818 -  1869.265:   85.7182%  (      274)
00:14:09.787   1869.265 -  1876.713:   86.0408%  (      281)
00:14:09.787   1876.713 -  1884.160:   86.3669%  (      284)
00:14:09.787   1884.160 -  1891.607:   86.6631%  (      258)
00:14:09.787   1891.607 -  1899.055:   86.9799%  (      276)
00:14:09.787   1899.055 -  1906.502:   87.2727%  (      255)
00:14:09.787   1906.502 -  1921.396:   87.8410%  (      495)
00:14:09.787   1921.396 -  1936.291:   88.3691%  (      460)
00:14:09.787   1936.291 -  1951.185:   88.9006%  (      463)
00:14:09.787   1951.185 -  1966.080:   89.4138%  (      447)
00:14:09.787   1966.080 -  1980.975:   89.9006%  (      424)
00:14:09.787   1980.975 -  1995.869:   90.3460%  (      388)
00:14:09.787   1995.869 -  2010.764:   90.7823%  (      380)
00:14:09.787   2010.764 -  2025.658:   91.1795%  (      346)
00:14:09.787   2025.658 -  2040.553:   91.5297%  (      305)
00:14:09.787   2040.553 -  2055.447:   91.8488%  (      278)
00:14:09.787   2055.447 -  2070.342:   92.1427%  (      256)
00:14:09.787   2070.342 -  2085.236:   92.4102%  (      233)
00:14:09.787   2085.236 -  2100.131:   92.6685%  (      225)
00:14:09.787   2100.131 -  2115.025:   92.9039%  (      205)
00:14:09.787   2115.025 -  2129.920:   93.1278%  (      195)
00:14:09.787   2129.920 -  2144.815:   93.3378%  (      183)
00:14:09.787   2144.815 -  2159.709:   93.5388%  (      175)
00:14:09.787   2159.709 -  2174.604:   93.7316%  (      168)
00:14:09.787   2174.604 -  2189.498:   93.9096%  (      155)
00:14:09.787   2189.498 -  2204.393:   94.0841%  (      152)
00:14:09.787   2204.393 -  2219.287:   94.2517%  (      146)
00:14:09.787   2219.287 -  2234.182:   94.4067%  (      135)
00:14:09.787   2234.182 -  2249.076:   94.5628%  (      136)
00:14:09.787   2249.076 -  2263.971:   94.7029%  (      122)
00:14:09.787   2263.971 -  2278.865:   94.8452%  (      124)
00:14:09.787   2278.865 -  2293.760:   94.9876%  (      124)
00:14:09.787   2293.760 -  2308.655:   95.1300%  (      124)
00:14:09.787   2308.655 -  2323.549:   95.2643%  (      117)
00:14:09.787   2323.549 -  2338.444:   95.4055%  (      123)
00:14:09.787   2338.444 -  2353.338:   95.5375%  (      115)
00:14:09.787   2353.338 -  2368.233:   95.6741%  (      119)
00:14:09.787   2368.233 -  2383.127:   95.8062%  (      115)
00:14:09.787   2383.127 -  2398.022:   95.9382%  (      115)
00:14:09.787   2398.022 -  2412.916:   96.0760%  (      120)
00:14:09.787   2412.916 -  2427.811:   96.2080%  (      115)
00:14:09.787   2427.811 -  2442.705:   96.3412%  (      116)
00:14:09.787   2442.705 -  2457.600:   96.4720%  (      114)
00:14:09.787   2457.600 -  2472.495:   96.6064%  (      117)
00:14:09.787   2472.495 -  2487.389:   96.7453%  (      121)
00:14:09.787   2487.389 -  2502.284:   96.8784%  (      116)
00:14:09.787   2502.284 -  2517.178:   97.0059%  (      111)
00:14:09.787   2517.178 -  2532.073:   97.1402%  (      117)
00:14:09.787   2532.073 -  2546.967:   97.2768%  (      119)
00:14:09.787   2546.967 -  2561.862:   97.4020%  (      109)
00:14:09.787   2561.862 -  2576.756:   97.5317%  (      113)
00:14:09.787   2576.756 -  2591.651:   97.6660%  (      117)
00:14:09.787   2591.651 -  2606.545:   97.7923%  (      110)
00:14:09.787   2606.545 -  2621.440:   97.9163%  (      108)
00:14:09.787   2621.440 -  2636.335:   98.0380%  (      106)
00:14:09.787   2636.335 -  2651.229:   98.1562%  (      103)
00:14:09.787   2651.229 -  2666.124:   98.2664%  (       96)
00:14:09.787   2666.124 -  2681.018:   98.3824%  (      101)
00:14:09.787   2681.018 -  2695.913:   98.4846%  (       89)
00:14:09.787   2695.913 -  2710.807:   98.5833%  (       86)
00:14:09.788   2710.807 -  2725.702:   98.6740%  (       79)
00:14:09.788   2725.702 -  2740.596:   98.7532%  (       69)
00:14:09.788   2740.596 -  2755.491:   98.8221%  (       60)
00:14:09.788   2755.491 -  2770.385:   98.8864%  (       56)
00:14:09.788   2770.385 -  2785.280:   98.9484%  (       54)
00:14:09.788   2785.280 -  2800.175:   99.0115%  (       55)
00:14:09.788   2800.175 -  2815.069:   99.0712%  (       52)
00:14:09.788   2815.069 -  2829.964:   99.1206%  (       43)
00:14:09.788   2829.964 -  2844.858:   99.1608%  (       35)
00:14:09.788   2844.858 -  2859.753:   99.2021%  (       36)
00:14:09.788   2859.753 -  2874.647:   99.2365%  (       30)
00:14:09.788   2874.647 -  2889.542:   99.2721%  (       31)
00:14:09.788   2889.542 -  2904.436:   99.2985%  (       23)
00:14:09.788   2904.436 -  2919.331:   99.3261%  (       24)
00:14:09.788   2919.331 -  2934.225:   99.3479%  (       19)
00:14:09.788   2934.225 -  2949.120:   99.3743%  (       23)
00:14:09.788   2949.120 -  2964.015:   99.3984%  (       21)
00:14:09.788   2964.015 -  2978.909:   99.4179%  (       17)
00:14:09.788   2978.909 -  2993.804:   99.4409%  (       20)
00:14:09.788   2993.804 -  3008.698:   99.4581%  (       15)
00:14:09.788   3008.698 -  3023.593:   99.4799%  (       19)
00:14:09.788   3023.593 -  3038.487:   99.4983%  (       16)
00:14:09.788   3038.487 -  3053.382:   99.5201%  (       19)
00:14:09.788   3053.382 -  3068.276:   99.5373%  (       15)
00:14:09.788   3068.276 -  3083.171:   99.5488%  (       10)
00:14:09.788   3083.171 -  3098.065:   99.5637%  (       13)
00:14:09.788   3098.065 -  3112.960:   99.5787%  (       13)
00:14:09.788   3112.960 -  3127.855:   99.5913%  (       11)
00:14:09.788   3127.855 -  3142.749:   99.6028%  (       10)
00:14:09.788   3142.749 -  3157.644:   99.6143%  (       10)
00:14:09.788   3157.644 -  3172.538:   99.6246%  (        9)
00:14:09.788   3172.538 -  3187.433:   99.6303%  (        5)
00:14:09.788   3187.433 -  3202.327:   99.6361%  (        5)
00:14:09.788   3202.327 -  3217.222:   99.6453%  (        8)
00:14:09.788   3217.222 -  3232.116:   99.6533%  (        7)
00:14:09.788   3232.116 -  3247.011:   99.6556%  (        2)
00:14:09.788   3247.011 -  3261.905:   99.6625%  (        6)
00:14:09.788   3261.905 -  3276.800:   99.6682%  (        5)
00:14:09.788   3276.800 -  3291.695:   99.6740%  (        5)
00:14:09.788   3291.695 -  3306.589:   99.6797%  (        5)
00:14:09.788   3306.589 -  3321.484:   99.6854%  (        5)
00:14:09.788   3321.484 -  3336.378:   99.6889%  (        3)
00:14:09.788   3336.378 -  3351.273:   99.6946%  (        5)
00:14:09.788   3351.273 -  3366.167:   99.6992%  (        4)
00:14:09.788   3366.167 -  3381.062:   99.7050%  (        5)
00:14:09.788   3381.062 -  3395.956:   99.7118%  (        6)
00:14:09.788   3395.956 -  3410.851:   99.7153%  (        3)
00:14:09.788   3410.851 -  3425.745:   99.7199%  (        4)
00:14:09.788   3425.745 -  3440.640:   99.7233%  (        3)
00:14:09.788   3440.640 -  3455.535:   99.7279%  (        4)
00:14:09.788   3455.535 -  3470.429:   99.7325%  (        4)
00:14:09.788   3470.429 -  3485.324:   99.7371%  (        4)
00:14:09.788   3485.324 -  3500.218:   99.7428%  (        5)
00:14:09.788   3500.218 -  3515.113:   99.7497%  (        6)
00:14:09.788   3515.113 -  3530.007:   99.7566%  (        6)
00:14:09.788   3530.007 -  3544.902:   99.7601%  (        3)
00:14:09.788   3544.902 -  3559.796:   99.7646%  (        4)
00:14:09.788   3559.796 -  3574.691:   99.7704%  (        5)
00:14:09.788   3574.691 -  3589.585:   99.7761%  (        5)
00:14:09.788   3589.585 -  3604.480:   99.7807%  (        4)
00:14:09.788   3604.480 -  3619.375:   99.7865%  (        5)
00:14:09.788   3619.375 -  3634.269:   99.7911%  (        4)
00:14:09.788   3634.269 -  3649.164:   99.7968%  (        5)
00:14:09.788   3649.164 -  3664.058:   99.8037%  (        6)
00:14:09.788   3664.058 -  3678.953:   99.8083%  (        4)
00:14:09.788   3678.953 -  3693.847:   99.8106%  (        2)
00:14:09.788   3693.847 -  3708.742:   99.8140%  (        3)
00:14:09.788   3708.742 -  3723.636:   99.8198%  (        5)
00:14:09.788   3723.636 -  3738.531:   99.8243%  (        4)
00:14:09.788   3738.531 -  3753.425:   99.8312%  (        6)
00:14:09.788   3753.425 -  3768.320:   99.8381%  (        6)
00:14:09.788   3768.320 -  3783.215:   99.8404%  (        2)
00:14:09.788   3783.215 -  3798.109:   99.8450%  (        4)
00:14:09.788   3798.109 -  3813.004:   99.8519%  (        6)
00:14:09.788   3813.004 -  3842.793:   99.8634%  (       10)
00:14:09.788   3842.793 -  3872.582:   99.8760%  (       11)
00:14:09.788   3872.582 -  3902.371:   99.8852%  (        8)
00:14:09.788   3902.371 -  3932.160:   99.8955%  (        9)
00:14:09.788   3932.160 -  3961.949:   99.9047%  (        8)
00:14:09.788   3961.949 -  3991.738:   99.9139%  (        8)
00:14:09.788   3991.738 -  4021.527:   99.9208%  (        6)
00:14:09.788   4021.527 -  4051.316:   99.9254%  (        4)
00:14:09.788   4051.316 -  4081.105:   99.9311%  (        5)
00:14:09.788   4081.105 -  4110.895:   99.9346%  (        3)
00:14:09.788   4110.895 -  4140.684:   99.9369%  (        2)
00:14:09.788   4140.684 -  4170.473:   99.9380%  (        1)
00:14:09.788   4170.473 -  4200.262:   99.9426%  (        4)
00:14:09.788   4200.262 -  4230.051:   99.9460%  (        3)
00:14:09.788   4230.051 -  4259.840:   99.9495%  (        3)
00:14:09.788   4259.840 -  4289.629:   99.9541%  (        4)
00:14:09.788   4289.629 -  4319.418:   99.9575%  (        3)
00:14:09.788   4319.418 -  4349.207:   99.9598%  (        2)
00:14:09.788   4349.207 -  4378.996:   99.9610%  (        1)
00:14:09.788   4378.996 -  4408.785:   99.9633%  (        2)
00:14:09.788   4408.785 -  4438.575:   99.9644%  (        1)
00:14:09.788   4438.575 -  4468.364:   99.9667%  (        2)
00:14:09.788   4468.364 -  4498.153:   99.9679%  (        1)
00:14:09.788   4498.153 -  4527.942:   99.9702%  (        2)
00:14:09.788   4527.942 -  4557.731:   99.9724%  (        2)
00:14:09.788   4557.731 -  4587.520:   99.9736%  (        1)
00:14:09.788   4587.520 -  4617.309:   99.9747%  (        1)
00:14:09.788   4617.309 -  4647.098:   99.9770%  (        2)
00:14:09.788   4647.098 -  4676.887:   99.9782%  (        1)
00:14:09.788   4676.887 -  4706.676:   99.9805%  (        2)
00:14:09.788   4706.676 -  4736.465:   99.9816%  (        1)
00:14:09.788   4736.465 -  4766.255:   99.9839%  (        2)
00:14:09.788   4796.044 -  4825.833:   99.9862%  (        2)
00:14:09.788   4825.833 -  4855.622:   99.9874%  (        1)
00:14:09.788   4855.622 -  4885.411:   99.9897%  (        2)
00:14:09.788   4885.411 -  4915.200:   99.9908%  (        1)
00:14:09.788   4915.200 -  4944.989:   99.9931%  (        2)
00:14:09.788   4944.989 -  4974.778:   99.9954%  (        2)
00:14:09.788   4974.778 -  5004.567:   99.9966%  (        1)
00:14:09.788   5004.567 -  5034.356:   99.9989%  (        2)
00:14:09.788   5064.145 -  5093.935:  100.0000%  (        1)
00:14:09.788  
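Note: the histogram above is cumulative, so any tail percentile can be read off as the first bucket whose cumulative column reaches the target. A rough sketch, assuming this output were saved to a file (perf.log is a hypothetical name; $(NF-2) is the cumulative-percentage field on "range: pct% (count)" rows):

  # Print the first bucket at or above 99% cumulative I/O.
  awk '/%.*\(/ { p = $(NF-2); sub(/%/, "", p); if (p + 0 >= 99) { print; exit } }' perf.log
  # For the read run this lands on the 2785.280 - 2800.175 us bucket, matching
  # the "99.00000% : 2800.175us" row in the summary table.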
00:14:09.788   05:59:30 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:11.165  Initializing NVMe Controllers
00:14:11.165  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:11.165  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:11.165  Initialization complete. Launching workers.
00:14:11.165  ========================================================
00:14:11.165                                                                             Latency(us)
00:14:11.166  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:11.166  PCIE (0000:00:10.0) NSID 1 from core  0:   90884.10    1065.05    1407.52     676.14    5133.25
00:14:11.166  ========================================================
00:14:11.166  Total                                  :   90884.10    1065.05    1407.52     676.14    5133.25
00:14:11.166  
00:14:11.166  Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:11.166  =================================================================================
00:14:11.166    1.00000% :   878.778us
00:14:11.166   10.00000% :  1020.276us
00:14:11.166   25.00000% :  1139.433us
00:14:11.166   50.00000% :  1355.404us
00:14:11.166   75.00000% :  1638.400us
00:14:11.166   90.00000% :  1839.476us
00:14:11.166   95.00000% :  1966.080us
00:14:11.166   98.00000% :  2189.498us
00:14:11.166   99.00000% :  2383.127us
00:14:11.166   99.50000% :  2725.702us
00:14:11.166   99.90000% :  4021.527us
00:14:11.166   99.99000% :  4944.989us
00:14:11.166   99.99900% :  5153.513us
00:14:11.166   99.99990% :  5153.513us
00:14:11.166   99.99999% :  5153.513us
00:14:11.166  
00:14:11.166  Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:14:11.166  ==============================================================================
00:14:11.166         Range in us     Cumulative    IO count
00:14:11.166    673.978 -   677.702:    0.0022%  (        2)
00:14:11.166    677.702 -   681.425:    0.0044%  (        2)
00:14:11.166    681.425 -   685.149:    0.0055%  (        1)
00:14:11.166    692.596 -   696.320:    0.0066%  (        1)
00:14:11.166    696.320 -   700.044:    0.0088%  (        2)
00:14:11.166    700.044 -   703.767:    0.0099%  (        1)
00:14:11.166    703.767 -   707.491:    0.0132%  (        3)
00:14:11.166    707.491 -   711.215:    0.0154%  (        2)
00:14:11.166    711.215 -   714.938:    0.0165%  (        1)
00:14:11.166    714.938 -   718.662:    0.0187%  (        2)
00:14:11.166    718.662 -   722.385:    0.0220%  (        3)
00:14:11.166    722.385 -   726.109:    0.0275%  (        5)
00:14:11.166    726.109 -   729.833:    0.0286%  (        1)
00:14:11.166    729.833 -   733.556:    0.0330%  (        4)
00:14:11.166    733.556 -   737.280:    0.0385%  (        5)
00:14:11.166    737.280 -   741.004:    0.0407%  (        2)
00:14:11.166    741.004 -   744.727:    0.0484%  (        7)
00:14:11.166    744.727 -   748.451:    0.0550%  (        6)
00:14:11.166    748.451 -   752.175:    0.0583%  (        3)
00:14:11.166    752.175 -   755.898:    0.0605%  (        2)
00:14:11.166    755.898 -   759.622:    0.0627%  (        2)
00:14:11.166    759.622 -   763.345:    0.0693%  (        6)
00:14:11.166    763.345 -   767.069:    0.0748%  (        5)
00:14:11.166    767.069 -   770.793:    0.0869%  (       11)
00:14:11.166    770.793 -   774.516:    0.1023%  (       14)
00:14:11.166    774.516 -   778.240:    0.1067%  (        4)
00:14:11.166    778.240 -   781.964:    0.1188%  (       11)
00:14:11.166    781.964 -   785.687:    0.1265%  (        7)
00:14:11.166    785.687 -   789.411:    0.1342%  (        7)
00:14:11.166    789.411 -   793.135:    0.1496%  (       14)
00:14:11.166    793.135 -   796.858:    0.1639%  (       13)
00:14:11.166    796.858 -   800.582:    0.1782%  (       13)
00:14:11.166    800.582 -   804.305:    0.1892%  (       10)
00:14:11.166    804.305 -   808.029:    0.2013%  (       11)
00:14:11.166    808.029 -   811.753:    0.2200%  (       17)
00:14:11.166    811.753 -   815.476:    0.2530%  (       30)
00:14:11.166    815.476 -   819.200:    0.2783%  (       23)
00:14:11.166    819.200 -   822.924:    0.3124%  (       31)
00:14:11.166    822.924 -   826.647:    0.3355%  (       21)
00:14:11.166    826.647 -   830.371:    0.3575%  (       20)
00:14:11.166    830.371 -   834.095:    0.3828%  (       23)
00:14:11.166    834.095 -   837.818:    0.4180%  (       32)
00:14:11.166    837.818 -   841.542:    0.4521%  (       31)
00:14:11.166    841.542 -   845.265:    0.4961%  (       40)
00:14:11.166    845.265 -   848.989:    0.5412%  (       41)
00:14:11.166    848.989 -   852.713:    0.5918%  (       46)
00:14:11.166    852.713 -   856.436:    0.6358%  (       40)
00:14:11.166    856.436 -   860.160:    0.6842%  (       44)
00:14:11.166    860.160 -   863.884:    0.7568%  (       66)
00:14:11.166    863.884 -   867.607:    0.8118%  (       50)
00:14:11.166    867.607 -   871.331:    0.8866%  (       68)
00:14:11.166    871.331 -   875.055:    0.9482%  (       56)
00:14:11.166    875.055 -   878.778:    1.0241%  (       69)
00:14:11.166    878.778 -   882.502:    1.1154%  (       83)
00:14:11.166    882.502 -   886.225:    1.2188%  (       94)
00:14:11.166    886.225 -   889.949:    1.3178%  (       90)
00:14:11.166    889.949 -   893.673:    1.4267%  (       99)
00:14:11.166    893.673 -   897.396:    1.5631%  (      124)
00:14:11.166    897.396 -   901.120:    1.6698%  (       97)
00:14:11.166    901.120 -   904.844:    1.8051%  (      123)
00:14:11.166    904.844 -   908.567:    1.9305%  (      114)
00:14:11.166    908.567 -   912.291:    2.0845%  (      140)
00:14:11.166    912.291 -   916.015:    2.2341%  (      136)
00:14:11.166    916.015 -   919.738:    2.3990%  (      150)
00:14:11.166    919.738 -   923.462:    2.5662%  (      152)
00:14:11.166    923.462 -   927.185:    2.7752%  (      190)
00:14:11.166    927.185 -   930.909:    2.9534%  (      162)
00:14:11.166    930.909 -   934.633:    3.1514%  (      180)
00:14:11.166    934.633 -   938.356:    3.3956%  (      222)
00:14:11.166    938.356 -   942.080:    3.5991%  (      185)
00:14:11.166    942.080 -   945.804:    3.8587%  (      236)
00:14:11.166    945.804 -   949.527:    4.0897%  (      210)
00:14:11.166    949.527 -   953.251:    4.3658%  (      251)
00:14:11.166    953.251 -   960.698:    4.8916%  (      478)
00:14:11.166    960.698 -   968.145:    5.4790%  (      534)
00:14:11.166    968.145 -   975.593:    6.0950%  (      560)
00:14:11.166    975.593 -   983.040:    6.7715%  (      615)
00:14:11.166    983.040 -   990.487:    7.4314%  (      600)
00:14:11.166    990.487 -   997.935:    8.1695%  (      671)
00:14:11.166    997.935 -  1005.382:    8.9406%  (      701)
00:14:11.166   1005.382 -  1012.829:    9.7040%  (      694)
00:14:11.166   1012.829 -  1020.276:   10.4784%  (      704)
00:14:11.166   1020.276 -  1027.724:   11.2946%  (      742)
00:14:11.166   1027.724 -  1035.171:   12.1569%  (      784)
00:14:11.166   1035.171 -  1042.618:   13.0171%  (      782)
00:14:11.166   1042.618 -  1050.065:   13.9433%  (      842)
00:14:11.166   1050.065 -  1057.513:   14.8409%  (      816)
00:14:11.166   1057.513 -  1064.960:   15.7385%  (      816)
00:14:11.166   1064.960 -  1072.407:   16.6383%  (      818)
00:14:11.166   1072.407 -  1079.855:   17.5578%  (      836)
00:14:11.166   1079.855 -  1087.302:   18.5445%  (      897)
00:14:11.166   1087.302 -  1094.749:   19.4630%  (      835)
00:14:11.166   1094.749 -  1102.196:   20.4607%  (      907)
00:14:11.166   1102.196 -  1109.644:   21.3517%  (      810)
00:14:11.166   1109.644 -  1117.091:   22.3174%  (      878)
00:14:11.166   1117.091 -  1124.538:   23.3008%  (      894)
00:14:11.166   1124.538 -  1131.985:   24.2545%  (      867)
00:14:11.166   1131.985 -  1139.433:   25.2181%  (      876)
00:14:11.166   1139.433 -  1146.880:   26.1124%  (      813)
00:14:11.166   1146.880 -  1154.327:   27.0979%  (      896)
00:14:11.166   1154.327 -  1161.775:   28.0373%  (      854)
00:14:11.166   1161.775 -  1169.222:   28.9184%  (      801)
00:14:11.166   1169.222 -  1176.669:   29.8787%  (      873)
00:14:11.166   1176.669 -  1184.116:   30.8104%  (      847)
00:14:11.166   1184.116 -  1191.564:   31.7486%  (      853)
00:14:11.166   1191.564 -  1199.011:   32.6781%  (      845)
00:14:11.166   1199.011 -  1206.458:   33.5801%  (      820)
00:14:11.166   1206.458 -  1213.905:   34.5173%  (      852)
00:14:11.166   1213.905 -  1221.353:   35.4039%  (      806)
00:14:11.166   1221.353 -  1228.800:   36.3245%  (      837)
00:14:11.166   1228.800 -  1236.247:   37.2023%  (      798)
00:14:11.166   1236.247 -  1243.695:   38.0955%  (      812)
00:14:11.166   1243.695 -  1251.142:   38.9491%  (      776)
00:14:11.166   1251.142 -  1258.589:   39.8456%  (      815)
00:14:11.166   1258.589 -  1266.036:   40.7332%  (      807)
00:14:11.166   1266.036 -  1273.484:   41.6088%  (      796)
00:14:11.166   1273.484 -  1280.931:   42.4734%  (      786)
00:14:11.166   1280.931 -  1288.378:   43.3435%  (      791)
00:14:11.166   1288.378 -  1295.825:   44.2356%  (      811)
00:14:11.166   1295.825 -  1303.273:   45.0441%  (      735)
00:14:11.166   1303.273 -  1310.720:   45.8987%  (      777)
00:14:11.166   1310.720 -  1318.167:   46.7391%  (      764)
00:14:11.166   1318.167 -  1325.615:   47.5102%  (      701)
00:14:11.166   1325.615 -  1333.062:   48.3187%  (      735)
00:14:11.166   1333.062 -  1340.509:   49.0755%  (      688)
00:14:11.166   1340.509 -  1347.956:   49.8510%  (      705)
00:14:11.166   1347.956 -  1355.404:   50.5890%  (      671)
00:14:11.166   1355.404 -  1362.851:   51.3161%  (      661)
00:14:11.166   1362.851 -  1370.298:   52.0256%  (      645)
00:14:11.166   1370.298 -  1377.745:   52.7538%  (      662)
00:14:11.166   1377.745 -  1385.193:   53.4974%  (      676)
00:14:11.166   1385.193 -  1392.640:   54.2014%  (      640)
00:14:11.166   1392.640 -  1400.087:   54.9449%  (      676)
00:14:11.166   1400.087 -  1407.535:   55.6335%  (      626)
00:14:11.166   1407.535 -  1414.982:   56.3232%  (      627)
00:14:11.166   1414.982 -  1422.429:   56.9722%  (      590)
00:14:11.166   1422.429 -  1429.876:   57.6850%  (      648)
00:14:11.166   1429.876 -  1437.324:   58.3439%  (      599)
00:14:11.166   1437.324 -  1444.771:   59.0292%  (      623)
00:14:11.166   1444.771 -  1452.218:   59.6946%  (      605)
00:14:11.166   1452.218 -  1459.665:   60.2985%  (      549)
00:14:11.166   1459.665 -  1467.113:   60.9541%  (      596)
00:14:11.166   1467.113 -  1474.560:   61.5800%  (      569)
00:14:11.166   1474.560 -  1482.007:   62.2059%  (      569)
00:14:11.166   1482.007 -  1489.455:   62.8527%  (      588)
00:14:11.166   1489.455 -  1496.902:   63.4984%  (      587)
00:14:11.166   1496.902 -  1504.349:   64.1232%  (      568)
00:14:11.166   1504.349 -  1511.796:   64.7556%  (      575)
00:14:11.166   1511.796 -  1519.244:   65.3969%  (      583)
00:14:11.166   1519.244 -  1526.691:   65.9920%  (      541)
00:14:11.166   1526.691 -  1534.138:   66.5992%  (      552)
00:14:11.166   1534.138 -  1541.585:   67.2218%  (      566)
00:14:11.166   1541.585 -  1549.033:   67.8620%  (      582)
00:14:11.166   1549.033 -  1556.480:   68.4560%  (      540)
00:14:11.166   1556.480 -  1563.927:   69.0950%  (      581)
00:14:11.166   1563.927 -  1571.375:   69.6681%  (      521)
00:14:11.167   1571.375 -  1578.822:   70.2874%  (      563)
00:14:11.167   1578.822 -  1586.269:   70.8748%  (      534)
00:14:11.167   1586.269 -  1593.716:   71.5172%  (      584)
00:14:11.167   1593.716 -  1601.164:   72.1046%  (      534)
00:14:11.167   1601.164 -  1608.611:   72.6865%  (      529)
00:14:11.167   1608.611 -  1616.058:   73.2838%  (      543)
00:14:11.167   1616.058 -  1623.505:   73.8755%  (      538)
00:14:11.167   1623.505 -  1630.953:   74.4695%  (      540)
00:14:11.167   1630.953 -  1638.400:   75.0426%  (      521)
00:14:11.167   1638.400 -  1645.847:   75.6432%  (      546)
00:14:11.167   1645.847 -  1653.295:   76.2625%  (      563)
00:14:11.167   1653.295 -  1660.742:   76.8466%  (      531)
00:14:11.167   1660.742 -  1668.189:   77.4010%  (      504)
00:14:11.167   1668.189 -  1675.636:   78.0082%  (      552)
00:14:11.167   1675.636 -  1683.084:   78.5659%  (      507)
00:14:11.167   1683.084 -  1690.531:   79.1587%  (      539)
00:14:11.167   1690.531 -  1697.978:   79.7516%  (      539)
00:14:11.167   1697.978 -  1705.425:   80.3434%  (      538)
00:14:11.167   1705.425 -  1712.873:   80.9099%  (      515)
00:14:11.167   1712.873 -  1720.320:   81.5050%  (      541)
00:14:11.167   1720.320 -  1727.767:   82.1056%  (      546)
00:14:11.167   1727.767 -  1735.215:   82.6567%  (      501)
00:14:11.167   1735.215 -  1742.662:   83.2386%  (      529)
00:14:11.167   1742.662 -  1750.109:   83.8171%  (      526)
00:14:11.167   1750.109 -  1757.556:   84.3781%  (      510)
00:14:11.167   1757.556 -  1765.004:   84.9611%  (      530)
00:14:11.167   1765.004 -  1772.451:   85.5045%  (      494)
00:14:11.167   1772.451 -  1779.898:   86.0655%  (      510)
00:14:11.167   1779.898 -  1787.345:   86.6034%  (      489)
00:14:11.167   1787.345 -  1794.793:   87.1281%  (      477)
00:14:11.167   1794.793 -  1802.240:   87.6627%  (      486)
00:14:11.167   1802.240 -  1809.687:   88.1356%  (      430)
00:14:11.167   1809.687 -  1817.135:   88.6218%  (      442)
00:14:11.167   1817.135 -  1824.582:   89.1102%  (      444)
00:14:11.167   1824.582 -  1832.029:   89.6063%  (      451)
00:14:11.167   1832.029 -  1839.476:   90.0441%  (      398)
00:14:11.167   1839.476 -  1846.924:   90.4973%  (      412)
00:14:11.167   1846.924 -  1854.371:   90.8999%  (      366)
00:14:11.167   1854.371 -  1861.818:   91.3113%  (      374)
00:14:11.167   1861.818 -  1869.265:   91.7194%  (      371)
00:14:11.167   1869.265 -  1876.713:   92.0857%  (      333)
00:14:11.167   1876.713 -  1884.160:   92.4113%  (      296)
00:14:11.167   1884.160 -  1891.607:   92.7325%  (      292)
00:14:11.167   1891.607 -  1899.055:   93.0217%  (      263)
00:14:11.167   1899.055 -  1906.502:   93.3044%  (      257)
00:14:11.167   1906.502 -  1921.396:   93.8291%  (      477)
00:14:11.167   1921.396 -  1936.291:   94.2724%  (      403)
00:14:11.167   1936.291 -  1951.185:   94.6728%  (      364)
00:14:11.167   1951.185 -  1966.080:   95.0446%  (      338)
00:14:11.167   1966.080 -  1980.975:   95.3779%  (      303)
00:14:11.167   1980.975 -  1995.869:   95.6848%  (      279)
00:14:11.167   1995.869 -  2010.764:   95.9752%  (      264)
00:14:11.167   2010.764 -  2025.658:   96.2414%  (      242)
00:14:11.167   2025.658 -  2040.553:   96.4548%  (      194)
00:14:11.167   2040.553 -  2055.447:   96.6682%  (      194)
00:14:11.167   2055.447 -  2070.342:   96.8563%  (      171)
00:14:11.167   2070.342 -  2085.236:   97.0345%  (      162)
00:14:11.167   2085.236 -  2100.131:   97.2083%  (      158)
00:14:11.167   2100.131 -  2115.025:   97.3590%  (      137)
00:14:11.167   2115.025 -  2129.920:   97.5141%  (      141)
00:14:11.167   2129.920 -  2144.815:   97.6449%  (      119)
00:14:11.167   2144.815 -  2159.709:   97.7791%  (      122)
00:14:11.167   2159.709 -  2174.604:   97.9078%  (      117)
00:14:11.167   2174.604 -  2189.498:   98.0387%  (      119)
00:14:11.167   2189.498 -  2204.393:   98.1388%  (       91)
00:14:11.167   2204.393 -  2219.287:   98.2422%  (       94)
00:14:11.167   2219.287 -  2234.182:   98.3335%  (       83)
00:14:11.167   2234.182 -  2249.076:   98.4281%  (       86)
00:14:11.167   2249.076 -  2263.971:   98.5139%  (       78)
00:14:11.167   2263.971 -  2278.865:   98.5942%  (       73)
00:14:11.167   2278.865 -  2293.760:   98.6712%  (       70)
00:14:11.167   2293.760 -  2308.655:   98.7383%  (       61)
00:14:11.167   2308.655 -  2323.549:   98.8065%  (       62)
00:14:11.167   2323.549 -  2338.444:   98.8714%  (       59)
00:14:11.167   2338.444 -  2353.338:   98.9341%  (       57)
00:14:11.167   2353.338 -  2368.233:   98.9858%  (       47)
00:14:11.167   2368.233 -  2383.127:   99.0375%  (       47)
00:14:11.167   2383.127 -  2398.022:   99.0749%  (       34)
00:14:11.167   2398.022 -  2412.916:   99.1134%  (       35)
00:14:11.167   2412.916 -  2427.811:   99.1431%  (       27)
00:14:11.167   2427.811 -  2442.705:   99.1761%  (       30)
00:14:11.167   2442.705 -  2457.600:   99.2036%  (       25)
00:14:11.167   2457.600 -  2472.495:   99.2267%  (       21)
00:14:11.167   2472.495 -  2487.389:   99.2498%  (       21)
00:14:11.167   2487.389 -  2502.284:   99.2773%  (       25)
00:14:11.167   2502.284 -  2517.178:   99.2949%  (       16)
00:14:11.167   2517.178 -  2532.073:   99.3191%  (       22)
00:14:11.167   2532.073 -  2546.967:   99.3422%  (       21)
00:14:11.167   2546.967 -  2561.862:   99.3642%  (       20)
00:14:11.167   2561.862 -  2576.756:   99.3829%  (       17)
00:14:11.167   2576.756 -  2591.651:   99.3928%  (        9)
00:14:11.167   2591.651 -  2606.545:   99.4071%  (       13)
00:14:11.167   2606.545 -  2621.440:   99.4170%  (        9)
00:14:11.167   2621.440 -  2636.335:   99.4313%  (       13)
00:14:11.167   2636.335 -  2651.229:   99.4456%  (       13)
00:14:11.167   2651.229 -  2666.124:   99.4577%  (       11)
00:14:11.167   2666.124 -  2681.018:   99.4698%  (       11)
00:14:11.167   2681.018 -  2695.913:   99.4830%  (       12)
00:14:11.167   2695.913 -  2710.807:   99.4929%  (        9)
00:14:11.167   2710.807 -  2725.702:   99.5050%  (       11)
00:14:11.167   2725.702 -  2740.596:   99.5116%  (        6)
00:14:11.167   2740.596 -  2755.491:   99.5215%  (        9)
00:14:11.167   2755.491 -  2770.385:   99.5292%  (        7)
00:14:11.167   2770.385 -  2785.280:   99.5369%  (        7)
00:14:11.167   2785.280 -  2800.175:   99.5468%  (        9)
00:14:11.167   2800.175 -  2815.069:   99.5545%  (        7)
00:14:11.167   2815.069 -  2829.964:   99.5622%  (        7)
00:14:11.167   2829.964 -  2844.858:   99.5677%  (        5)
00:14:11.167   2844.858 -  2859.753:   99.5754%  (        7)
00:14:11.167   2859.753 -  2874.647:   99.5864%  (       10)
00:14:11.167   2874.647 -  2889.542:   99.5952%  (        8)
00:14:11.167   2889.542 -  2904.436:   99.6018%  (        6)
00:14:11.167   2904.436 -  2919.331:   99.6106%  (        8)
00:14:11.167   2919.331 -  2934.225:   99.6205%  (        9)
00:14:11.167   2934.225 -  2949.120:   99.6271%  (        6)
00:14:11.167   2949.120 -  2964.015:   99.6315%  (        4)
00:14:11.167   2964.015 -  2978.909:   99.6392%  (        7)
00:14:11.167   2978.909 -  2993.804:   99.6458%  (        6)
00:14:11.167   2993.804 -  3008.698:   99.6524%  (        6)
00:14:11.167   3008.698 -  3023.593:   99.6612%  (        8)
00:14:11.167   3023.593 -  3038.487:   99.6667%  (        5)
00:14:11.167   3038.487 -  3053.382:   99.6733%  (        6)
00:14:11.167   3053.382 -  3068.276:   99.6821%  (        8)
00:14:11.167   3068.276 -  3083.171:   99.6887%  (        6)
00:14:11.167   3083.171 -  3098.065:   99.6942%  (        5)
00:14:11.167   3098.065 -  3112.960:   99.7019%  (        7)
00:14:11.167   3112.960 -  3127.855:   99.7074%  (        5)
00:14:11.167   3127.855 -  3142.749:   99.7140%  (        6)
00:14:11.167   3142.749 -  3157.644:   99.7195%  (        5)
00:14:11.167   3157.644 -  3172.538:   99.7283%  (        8)
00:14:11.167   3172.538 -  3187.433:   99.7338%  (        5)
00:14:11.167   3187.433 -  3202.327:   99.7437%  (        9)
00:14:11.167   3202.327 -  3217.222:   99.7492%  (        5)
00:14:11.167   3217.222 -  3232.116:   99.7558%  (        6)
00:14:11.167   3232.116 -  3247.011:   99.7613%  (        5)
00:14:11.167   3247.011 -  3261.905:   99.7657%  (        4)
00:14:11.167   3261.905 -  3276.800:   99.7723%  (        6)
00:14:11.167   3276.800 -  3291.695:   99.7767%  (        4)
00:14:11.167   3291.695 -  3306.589:   99.7833%  (        6)
00:14:11.167   3306.589 -  3321.484:   99.7877%  (        4)
00:14:11.167   3321.484 -  3336.378:   99.7910%  (        3)
00:14:11.167   3336.378 -  3351.273:   99.7943%  (        3)
00:14:11.167   3351.273 -  3366.167:   99.7987%  (        4)
00:14:11.167   3366.167 -  3381.062:   99.8009%  (        2)
00:14:11.167   3381.062 -  3395.956:   99.8053%  (        4)
00:14:11.167   3395.956 -  3410.851:   99.8097%  (        4)
00:14:11.167   3410.851 -  3425.745:   99.8141%  (        4)
00:14:11.167   3425.745 -  3440.640:   99.8174%  (        3)
00:14:11.167   3440.640 -  3455.535:   99.8196%  (        2)
00:14:11.167   3455.535 -  3470.429:   99.8240%  (        4)
00:14:11.167   3470.429 -  3485.324:   99.8284%  (        4)
00:14:11.167   3485.324 -  3500.218:   99.8317%  (        3)
00:14:11.167   3500.218 -  3515.113:   99.8350%  (        3)
00:14:11.167   3515.113 -  3530.007:   99.8372%  (        2)
00:14:11.167   3530.007 -  3544.902:   99.8394%  (        2)
00:14:11.167   3544.902 -  3559.796:   99.8405%  (        1)
00:14:11.167   3559.796 -  3574.691:   99.8427%  (        2)
00:14:11.167   3574.691 -  3589.585:   99.8460%  (        3)
00:14:11.167   3589.585 -  3604.480:   99.8471%  (        1)
00:14:11.167   3604.480 -  3619.375:   99.8504%  (        3)
00:14:11.167   3619.375 -  3634.269:   99.8515%  (        1)
00:14:11.167   3634.269 -  3649.164:   99.8526%  (        1)
00:14:11.167   3649.164 -  3664.058:   99.8548%  (        2)
00:14:11.167   3664.058 -  3678.953:   99.8559%  (        1)
00:14:11.167   3678.953 -  3693.847:   99.8581%  (        2)
00:14:11.167   3693.847 -  3708.742:   99.8614%  (        3)
00:14:11.167   3723.636 -  3738.531:   99.8636%  (        2)
00:14:11.167   3738.531 -  3753.425:   99.8647%  (        1)
00:14:11.167   3753.425 -  3768.320:   99.8680%  (        3)
00:14:11.167   3768.320 -  3783.215:   99.8702%  (        2)
00:14:11.167   3783.215 -  3798.109:   99.8724%  (        2)
00:14:11.167   3813.004 -  3842.793:   99.8768%  (        4)
00:14:11.167   3842.793 -  3872.582:   99.8823%  (        5)
00:14:11.167   3872.582 -  3902.371:   99.8867%  (        4)
00:14:11.167   3902.371 -  3932.160:   99.8911%  (        4)
00:14:11.167   3932.160 -  3961.949:   99.8944%  (        3)
00:14:11.167   3961.949 -  3991.738:   99.8988%  (        4)
00:14:11.167   3991.738 -  4021.527:   99.9032%  (        4)
00:14:11.167   4021.527 -  4051.316:   99.9065%  (        3)
00:14:11.167   4051.316 -  4081.105:   99.9098%  (        3)
00:14:11.167   4081.105 -  4110.895:   99.9120%  (        2)
00:14:11.167   4110.895 -  4140.684:   99.9153%  (        3)
00:14:11.167   4140.684 -  4170.473:   99.9175%  (        2)
00:14:11.167   4170.473 -  4200.262:   99.9208%  (        3)
00:14:11.167   4200.262 -  4230.051:   99.9241%  (        3)
00:14:11.167   4230.051 -  4259.840:   99.9252%  (        1)
00:14:11.168   4259.840 -  4289.629:   99.9285%  (        3)
00:14:11.168   4289.629 -  4319.418:   99.9318%  (        3)
00:14:11.168   4319.418 -  4349.207:   99.9340%  (        2)
00:14:11.168   4349.207 -  4378.996:   99.9373%  (        3)
00:14:11.168   4378.996 -  4408.785:   99.9417%  (        4)
00:14:11.168   4408.785 -  4438.575:   99.9439%  (        2)
00:14:11.168   4438.575 -  4468.364:   99.9461%  (        2)
00:14:11.168   4468.364 -  4498.153:   99.9494%  (        3)
00:14:11.168   4498.153 -  4527.942:   99.9516%  (        2)
00:14:11.168   4527.942 -  4557.731:   99.9560%  (        4)
00:14:11.168   4557.731 -  4587.520:   99.9571%  (        1)
00:14:11.168   4587.520 -  4617.309:   99.9604%  (        3)
00:14:11.168   4617.309 -  4647.098:   99.9637%  (        3)
00:14:11.168   4647.098 -  4676.887:   99.9659%  (        2)
00:14:11.168   4676.887 -  4706.676:   99.9670%  (        1)
00:14:11.168   4706.676 -  4736.465:   99.9692%  (        2)
00:14:11.168   4736.465 -  4766.255:   99.9736%  (        4)
00:14:11.168   4766.255 -  4796.044:   99.9758%  (        2)
00:14:11.168   4796.044 -  4825.833:   99.9791%  (        3)
00:14:11.168   4825.833 -  4855.622:   99.9813%  (        2)
00:14:11.168   4855.622 -  4885.411:   99.9835%  (        2)
00:14:11.168   4885.411 -  4915.200:   99.9868%  (        3)
00:14:11.168   4915.200 -  4944.989:   99.9901%  (        3)
00:14:11.168   4944.989 -  4974.778:   99.9912%  (        1)
00:14:11.168   4974.778 -  5004.567:   99.9934%  (        2)
00:14:11.168   5004.567 -  5034.356:   99.9956%  (        2)
00:14:11.168   5034.356 -  5064.145:   99.9967%  (        1)
00:14:11.168   5064.145 -  5093.935:   99.9978%  (        1)
00:14:11.168   5093.935 -  5123.724:   99.9989%  (        1)
00:14:11.168   5123.724 -  5153.513:  100.0000%  (        1)
00:14:11.168  
00:14:11.168   05:59:31 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:14:11.168  
00:14:11.168  real	0m2.584s
00:14:11.168  user	0m2.228s
00:14:11.168  sys	0m0.267s
00:14:11.168   05:59:31 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:11.168   05:59:31 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:14:11.168  ************************************
00:14:11.168  END TEST nvme_perf
00:14:11.168  ************************************
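The table above is the tail of spdk_nvme_perf's cumulative latency histogram: each row is a latency bucket in microseconds, the percentage is cumulative, and the parenthesized number is that bucket's IO count. A sketch of reproducing such a run from an SPDK build tree follows; only -q/-o/-w/-t/-c/-i appear verbatim in this log, so the -L software-latency-tracking flag and the binary path are assumptions to verify against spdk_nvme_perf --help.

  # Sketch: perf run with a software-tracked latency histogram (flags hedged above).
  PERF=./build/bin/spdk_nvme_perf
  sudo "$PERF" -q 128 -o 4096 -w randread -t 10 -c 0x1 -L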
00:14:11.168   05:59:31 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:11.168   05:59:31 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:11.168   05:59:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:11.168   05:59:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:11.168  ************************************
00:14:11.168  START TEST nvme_hello_world
00:14:11.168  ************************************
00:14:11.168   05:59:31 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:11.168  Initializing NVMe Controllers
00:14:11.168  Attached to 0000:00:10.0
00:14:11.168    Namespace ID: 1 size: 5GB
00:14:11.168  Initialization complete.
00:14:11.168  INFO: using host memory buffer for IO
00:14:11.168  Hello world!
00:14:11.427  
00:14:11.427  real	0m0.250s
00:14:11.427  user	0m0.093s
00:14:11.427  sys	0m0.117s
00:14:11.427   05:59:32 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:11.427   05:59:32 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:14:11.427  ************************************
00:14:11.427  END TEST nvme_hello_world
00:14:11.427  ************************************
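Every START TEST / END TEST banner pair and the real/user/sys triplet between them come from the suite's run_test helper in autotest_common.sh, which wraps the command in bash's time keyword. A minimal sketch of that pattern (the banner text mirrors this log; the body is an assumption, not the actual helper):

  # Banner, timed command, closing banner -- the shape seen throughout this log.
  run_test_sketch() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@"        # bash's time keyword prints the real/user/sys lines
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
  }

  run_test_sketch nvme_hello_world ./build/examples/hello_world -i 0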
00:14:11.427   05:59:32 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:11.427   05:59:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:11.427   05:59:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:11.427   05:59:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:11.427  ************************************
00:14:11.427  START TEST nvme_sgl
00:14:11.427  ************************************
00:14:11.427   05:59:32 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:11.686  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:14:11.686  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:14:11.686  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:14:11.686  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:14:11.686  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:14:11.686  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:14:11.686  NVMe Readv/Writev Request test
00:14:11.686  Attached to 0000:00:10.0
00:14:11.686  0000:00:10.0: build_io_request_2 test passed
00:14:11.686  0000:00:10.0: build_io_request_4 test passed
00:14:11.686  0000:00:10.0: build_io_request_5 test passed
00:14:11.686  0000:00:10.0: build_io_request_6 test passed
00:14:11.686  0000:00:10.0: build_io_request_7 test passed
00:14:11.686  0000:00:10.0: build_io_request_10 test passed
00:14:11.686  Cleaning up...
00:14:11.686  
00:14:11.686  real	0m0.291s
00:14:11.686  user	0m0.141s
00:14:11.686  sys	0m0.104s
00:14:11.686   05:59:32 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:11.686   05:59:32 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:14:11.686  ************************************
00:14:11.686  END TEST nvme_sgl
00:14:11.686  ************************************
00:14:11.686   05:59:32 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:11.686   05:59:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:11.686   05:59:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:11.686   05:59:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:11.686  ************************************
00:14:11.686  START TEST nvme_e2edp
00:14:11.686  ************************************
00:14:11.686   05:59:32 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:11.945  NVMe Write/Read with End-to-End data protection test
00:14:11.945  Attached to 0000:00:10.0
00:14:11.945  Cleaning up...
00:14:11.945  
00:14:11.945  real	0m0.291s
00:14:11.945  user	0m0.103s
00:14:11.945  sys	0m0.133s
00:14:11.945   05:59:32 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:11.945   05:59:32 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:14:11.945  ************************************
00:14:11.945  END TEST nvme_e2edp
00:14:11.945  ************************************
00:14:11.945   05:59:32 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:11.945   05:59:32 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:11.945   05:59:32 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:11.945   05:59:32 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:11.945  ************************************
00:14:11.945  START TEST nvme_reserve
00:14:11.945  ************************************
00:14:11.945   05:59:32 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:12.205  =====================================================
00:14:12.205  NVMe Controller at PCI bus 0, device 16, function 0
00:14:12.205  =====================================================
00:14:12.205  Reservations:                Not Supported
00:14:12.205  Reservation test passed
00:14:12.205  
00:14:12.205  real	0m0.292s
00:14:12.205  user	0m0.114s
00:14:12.205  sys	0m0.127s
00:14:12.205   05:59:33 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:12.205   05:59:33 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:14:12.205  ************************************
00:14:12.205  END TEST nvme_reserve
00:14:12.205  ************************************
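The "Reservations: Not Supported" verdict reflects the controller's ONCS capability field; bit 5 of ONCS advertises reservation support. A standalone check with nvme-cli, where /dev/nvme0 is a placeholder device node:

  # ONCS bit 5 set => NVMe reservations supported.
  oncs=$(sudo nvme id-ctrl /dev/nvme0 | awk '/^oncs/ {print $3}')
  (( (oncs >> 5) & 1 )) && echo "reservations supported" || echo "reservations not supported"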
00:14:12.464   05:59:33 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:12.464   05:59:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:12.464   05:59:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:12.464   05:59:33 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:12.464  ************************************
00:14:12.464  START TEST nvme_err_injection
00:14:12.464  ************************************
00:14:12.464   05:59:33 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:12.723  NVMe Error Injection test
00:14:12.723  Attached to 0000:00:10.0
00:14:12.723  0000:00:10.0: get features failed as expected
00:14:12.723  0000:00:10.0: get features successfully as expected
00:14:12.723  0000:00:10.0: read failed as expected
00:14:12.723  0000:00:10.0: read successfully as expected
00:14:12.723  Cleaning up...
00:14:12.723  
00:14:12.723  real	0m0.301s
00:14:12.723  user	0m0.111s
00:14:12.723  sys	0m0.133s
00:14:12.723   05:59:33 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:12.723   05:59:33 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:14:12.723  ************************************
00:14:12.723  END TEST nvme_err_injection
00:14:12.723  ************************************
00:14:12.723   05:59:33 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:14:12.723   05:59:33 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:14:12.723   05:59:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:12.723   05:59:33 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:12.723  ************************************
00:14:12.723  START TEST nvme_overhead
00:14:12.723  ************************************
00:14:12.723   05:59:33 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:14:14.103  Initializing NVMe Controllers
00:14:14.103  Attached to 0000:00:10.0
00:14:14.103  Initialization complete. Launching workers.
00:14:14.103  submit (in ns)   avg, min, max =  17381.6,  13326.8, 226863.6
00:14:14.103  complete (in ns) avg, min, max =  11684.9,   8840.9, 105350.9
00:14:14.103  
00:14:14.103  Submit histogram
00:14:14.103  ================
00:14:14.103         Range in us     Cumulative     Count
00:14:14.103     13.324 -    13.382:    0.0110%  (        1)
00:14:14.103     13.789 -    13.847:    0.0329%  (        2)
00:14:14.103     13.847 -    13.905:    0.0548%  (        2)
00:14:14.103     13.905 -    13.964:    0.1096%  (        5)
00:14:14.103     13.964 -    14.022:    0.1534%  (        4)
00:14:14.103     14.022 -    14.080:    0.2630%  (       10)
00:14:14.103     14.080 -    14.138:    0.4821%  (       20)
00:14:14.103     14.138 -    14.196:    1.1723%  (       63)
00:14:14.103     14.196 -    14.255:    2.6734%  (      137)
00:14:14.103     14.255 -    14.313:    4.6675%  (      182)
00:14:14.103     14.313 -    14.371:    7.2313%  (      234)
00:14:14.103     14.371 -    14.429:   10.6169%  (      309)
00:14:14.103     14.429 -    14.487:   13.4108%  (      255)
00:14:14.103     14.487 -    14.545:   16.0951%  (      245)
00:14:14.103     14.545 -    14.604:   18.7466%  (      242)
00:14:14.103     14.604 -    14.662:   21.0146%  (      207)
00:14:14.103     14.662 -    14.720:   23.6660%  (      242)
00:14:14.103     14.720 -    14.778:   26.9640%  (      301)
00:14:14.103     14.778 -    14.836:   30.6563%  (      337)
00:14:14.103     14.836 -    14.895:   33.7460%  (      282)
00:14:14.103     14.895 -    15.011:   39.6297%  (      537)
00:14:14.103     15.011 -    15.127:   43.7384%  (      375)
00:14:14.103     15.127 -    15.244:   48.3510%  (      421)
00:14:14.103     15.244 -    15.360:   52.4159%  (      371)
00:14:14.103     15.360 -    15.476:   55.5495%  (      286)
00:14:14.103     15.476 -    15.593:   58.0037%  (      224)
00:14:14.103     15.593 -    15.709:   59.4062%  (      128)
00:14:14.103     15.709 -    15.825:   60.3594%  (       87)
00:14:14.103     15.825 -    15.942:   61.1482%  (       72)
00:14:14.103     15.942 -    16.058:   61.7289%  (       53)
00:14:14.103     16.058 -    16.175:   62.1891%  (       42)
00:14:14.103     16.175 -    16.291:   62.5726%  (       35)
00:14:14.103     16.291 -    16.407:   62.8465%  (       25)
00:14:14.103     16.407 -    16.524:   63.1204%  (       25)
00:14:14.103     16.524 -    16.640:   63.2190%  (        9)
00:14:14.103     16.640 -    16.756:   63.3395%  (       11)
00:14:14.103     16.756 -    16.873:   63.3943%  (        5)
00:14:14.103     16.873 -    16.989:   63.4382%  (        4)
00:14:14.103     16.989 -    17.105:   63.4929%  (        5)
00:14:14.103     17.105 -    17.222:   63.5258%  (        3)
00:14:14.103     17.222 -    17.338:   63.5696%  (        4)
00:14:14.103     17.571 -    17.687:   63.6244%  (        5)
00:14:14.103     17.687 -    17.804:   64.2708%  (       59)
00:14:14.103     17.804 -    17.920:   66.6484%  (      217)
00:14:14.103     17.920 -    18.036:   71.3597%  (      430)
00:14:14.103     18.036 -    18.153:   76.0710%  (      430)
00:14:14.103     18.153 -    18.269:   79.1279%  (      279)
00:14:14.103     18.269 -    18.385:   80.7166%  (      145)
00:14:14.103     18.385 -    18.502:   81.6479%  (       85)
00:14:14.103     18.502 -    18.618:   82.5134%  (       79)
00:14:14.103     18.618 -    18.735:   83.4338%  (       84)
00:14:14.103     18.735 -    18.851:   84.3212%  (       81)
00:14:14.103     18.851 -    18.967:   85.1211%  (       73)
00:14:14.103     18.967 -    19.084:   85.6032%  (       44)
00:14:14.103     19.084 -    19.200:   86.0962%  (       45)
00:14:14.103     19.200 -    19.316:   86.4249%  (       30)
00:14:14.103     19.316 -    19.433:   86.6002%  (       16)
00:14:14.103     19.433 -    19.549:   86.7645%  (       15)
00:14:14.103     19.549 -    19.665:   86.9618%  (       18)
00:14:14.103     19.665 -    19.782:   87.1809%  (       20)
00:14:14.103     19.782 -    19.898:   87.3891%  (       19)
00:14:14.103     19.898 -    20.015:   87.5096%  (       11)
00:14:14.103     20.015 -    20.131:   87.7068%  (       18)
00:14:14.103     20.131 -    20.247:   87.8383%  (       12)
00:14:14.103     20.247 -    20.364:   87.9588%  (       11)
00:14:14.103     20.364 -    20.480:   88.1560%  (       18)
00:14:14.103     20.480 -    20.596:   88.2765%  (       11)
00:14:14.103     20.596 -    20.713:   88.3532%  (        7)
00:14:14.103     20.713 -    20.829:   88.4628%  (       10)
00:14:14.103     20.829 -    20.945:   88.5943%  (       12)
00:14:14.103     20.945 -    21.062:   88.7805%  (       17)
00:14:14.103     21.062 -    21.178:   88.8791%  (        9)
00:14:14.103     21.178 -    21.295:   88.9997%  (       11)
00:14:14.103     21.295 -    21.411:   89.1092%  (       10)
00:14:14.103     21.411 -    21.527:   89.2078%  (        9)
00:14:14.103     21.527 -    21.644:   89.3065%  (        9)
00:14:14.103     21.644 -    21.760:   89.4051%  (        9)
00:14:14.103     21.760 -    21.876:   89.5256%  (       11)
00:14:14.103     21.876 -    21.993:   89.6132%  (        8)
00:14:14.103     21.993 -    22.109:   89.7228%  (       10)
00:14:14.103     22.109 -    22.225:   89.8214%  (        9)
00:14:14.103     22.225 -    22.342:   89.9419%  (       11)
00:14:14.103     22.342 -    22.458:   89.9967%  (        5)
00:14:14.103     22.458 -    22.575:   90.0844%  (        8)
00:14:14.103     22.575 -    22.691:   90.1939%  (       10)
00:14:14.103     22.691 -    22.807:   90.2706%  (        7)
00:14:14.103     22.807 -    22.924:   90.4021%  (       12)
00:14:14.103     22.924 -    23.040:   90.4898%  (        8)
00:14:14.103     23.040 -    23.156:   90.5774%  (        8)
00:14:14.103     23.156 -    23.273:   90.6322%  (        5)
00:14:14.103     23.273 -    23.389:   90.6870%  (        5)
00:14:14.103     23.389 -    23.505:   90.7308%  (        4)
00:14:14.103     23.505 -    23.622:   90.8513%  (       11)
00:14:14.103     23.622 -    23.738:   90.8842%  (        3)
00:14:14.103     23.738 -    23.855:   90.9390%  (        5)
00:14:14.103     23.855 -    23.971:   91.0157%  (        7)
00:14:14.103     23.971 -    24.087:   91.0705%  (        5)
00:14:14.103     24.087 -    24.204:   91.1581%  (        8)
00:14:14.103     24.204 -    24.320:   91.1910%  (        3)
00:14:14.103     24.320 -    24.436:   91.2677%  (        7)
00:14:14.103     24.436 -    24.553:   91.3334%  (        6)
00:14:14.103     24.553 -    24.669:   91.3772%  (        4)
00:14:14.103     24.669 -    24.785:   91.4539%  (        7)
00:14:14.103     24.785 -    24.902:   91.5197%  (        6)
00:14:14.103     24.902 -    25.018:   91.5416%  (        2)
00:14:14.103     25.018 -    25.135:   91.5744%  (        3)
00:14:14.103     25.135 -    25.251:   91.6511%  (        7)
00:14:14.103     25.251 -    25.367:   91.7278%  (        7)
00:14:14.103     25.367 -    25.484:   91.8155%  (        8)
00:14:14.103     25.484 -    25.600:   91.8593%  (        4)
00:14:14.103     25.600 -    25.716:   91.8812%  (        2)
00:14:14.103     25.716 -    25.833:   91.9689%  (        8)
00:14:14.103     25.833 -    25.949:   91.9908%  (        2)
00:14:14.103     25.949 -    26.065:   92.0237%  (        3)
00:14:14.103     26.065 -    26.182:   92.1004%  (        7)
00:14:14.103     26.182 -    26.298:   92.1551%  (        5)
00:14:14.103     26.298 -    26.415:   92.2318%  (        7)
00:14:14.103     26.415 -    26.531:   92.2976%  (        6)
00:14:14.103     26.531 -    26.647:   92.3852%  (        8)
00:14:14.103     26.647 -    26.764:   92.4400%  (        5)
00:14:14.103     26.764 -    26.880:   92.4948%  (        5)
00:14:14.103     26.880 -    26.996:   92.5386%  (        4)
00:14:14.103     26.996 -    27.113:   92.5496%  (        1)
00:14:14.103     27.113 -    27.229:   92.5715%  (        2)
00:14:14.103     27.229 -    27.345:   92.6372%  (        6)
00:14:14.103     27.345 -    27.462:   92.6591%  (        2)
00:14:14.103     27.462 -    27.578:   92.6811%  (        2)
00:14:14.103     27.578 -    27.695:   92.7797%  (        9)
00:14:14.103     27.695 -    27.811:   92.9002%  (       11)
00:14:14.103     27.811 -    27.927:   92.9659%  (        6)
00:14:14.103     27.927 -    28.044:   92.9988%  (        3)
00:14:14.103     28.044 -    28.160:   93.0536%  (        5)
00:14:14.103     28.160 -    28.276:   93.1084%  (        5)
00:14:14.103     28.276 -    28.393:   93.1193%  (        1)
00:14:14.103     28.393 -    28.509:   93.1631%  (        4)
00:14:14.103     28.509 -    28.625:   93.2070%  (        4)
00:14:14.103     28.625 -    28.742:   93.3056%  (        9)
00:14:14.103     28.742 -    28.858:   93.3384%  (        3)
00:14:14.103     28.858 -    28.975:   93.4151%  (        7)
00:14:14.103     28.975 -    29.091:   93.4809%  (        6)
00:14:14.103     29.091 -    29.207:   93.5357%  (        5)
00:14:14.103     29.207 -    29.324:   93.6891%  (       14)
00:14:14.103     29.324 -    29.440:   93.7877%  (        9)
00:14:14.103     29.440 -    29.556:   93.9958%  (       19)
00:14:14.103     29.556 -    29.673:   94.3684%  (       34)
00:14:14.103     29.673 -    29.789:   94.9052%  (       49)
00:14:14.103     29.789 -    30.022:   96.2748%  (      125)
00:14:14.103     30.022 -    30.255:   97.2061%  (       85)
00:14:14.103     30.255 -    30.487:   97.8416%  (       58)
00:14:14.103     30.487 -    30.720:   98.2250%  (       35)
00:14:14.103     30.720 -    30.953:   98.4442%  (       20)
00:14:14.103     30.953 -    31.185:   98.6195%  (       16)
00:14:14.103     31.185 -    31.418:   98.7619%  (       13)
00:14:14.103     31.418 -    31.651:   98.8934%  (       12)
00:14:14.103     31.651 -    31.884:   98.9153%  (        2)
00:14:14.103     31.884 -    32.116:   98.9701%  (        5)
00:14:14.103     32.116 -    32.349:   98.9920%  (        2)
00:14:14.103     32.349 -    32.582:   99.0249%  (        3)
00:14:14.103     32.582 -    32.815:   99.0577%  (        3)
00:14:14.103     32.815 -    33.047:   99.0797%  (        2)
00:14:14.103     33.047 -    33.280:   99.1016%  (        2)
00:14:14.103     33.513 -    33.745:   99.1125%  (        1)
00:14:14.103     33.745 -    33.978:   99.1454%  (        3)
00:14:14.103     34.444 -    34.676:   99.1673%  (        2)
00:14:14.103     34.909 -    35.142:   99.1783%  (        1)
00:14:14.104     35.142 -    35.375:   99.2002%  (        2)
00:14:14.104     35.375 -    35.607:   99.2440%  (        4)
00:14:14.104     35.607 -    35.840:   99.2550%  (        1)
00:14:14.104     35.840 -    36.073:   99.3207%  (        6)
00:14:14.104     36.073 -    36.305:   99.3426%  (        2)
00:14:14.104     36.305 -    36.538:   99.3864%  (        4)
00:14:14.104     36.538 -    36.771:   99.4193%  (        3)
00:14:14.104     36.771 -    37.004:   99.4303%  (        1)
00:14:14.104     37.004 -    37.236:   99.4631%  (        3)
00:14:14.104     37.236 -    37.469:   99.5070%  (        4)
00:14:14.104     37.469 -    37.702:   99.5617%  (        5)
00:14:14.104     37.702 -    37.935:   99.5837%  (        2)
00:14:14.104     37.935 -    38.167:   99.6275%  (        4)
00:14:14.104     38.167 -    38.400:   99.6494%  (        2)
00:14:14.104     38.633 -    38.865:   99.6603%  (        1)
00:14:14.104     39.098 -    39.331:   99.6713%  (        1)
00:14:14.104     39.331 -    39.564:   99.6823%  (        1)
00:14:14.104     40.262 -    40.495:   99.6932%  (        1)
00:14:14.104     40.960 -    41.193:   99.7042%  (        1)
00:14:14.104     41.425 -    41.658:   99.7151%  (        1)
00:14:14.104     41.891 -    42.124:   99.7261%  (        1)
00:14:14.104     42.589 -    42.822:   99.7370%  (        1)
00:14:14.104     42.822 -    43.055:   99.7480%  (        1)
00:14:14.104     43.055 -    43.287:   99.7590%  (        1)
00:14:14.104     43.287 -    43.520:   99.7699%  (        1)
00:14:14.104     43.985 -    44.218:   99.7809%  (        1)
00:14:14.104     44.684 -    44.916:   99.8028%  (        2)
00:14:14.104     45.149 -    45.382:   99.8247%  (        2)
00:14:14.104     45.382 -    45.615:   99.8357%  (        1)
00:14:14.104     45.615 -    45.847:   99.8466%  (        1)
00:14:14.104     46.313 -    46.545:   99.8576%  (        1)
00:14:14.104     46.778 -    47.011:   99.8685%  (        1)
00:14:14.104     47.011 -    47.244:   99.8795%  (        1)
00:14:14.104     49.804 -    50.036:   99.8904%  (        1)
00:14:14.104     51.433 -    51.665:   99.9014%  (        1)
00:14:14.104     52.364 -    52.596:   99.9123%  (        1)
00:14:14.104     53.062 -    53.295:   99.9233%  (        1)
00:14:14.104     55.855 -    56.087:   99.9343%  (        1)
00:14:14.104     56.785 -    57.018:   99.9452%  (        1)
00:14:14.104     89.833 -    90.298:   99.9562%  (        1)
00:14:14.104     92.625 -    93.091:   99.9781%  (        2)
00:14:14.104    104.727 -   105.193:   99.9890%  (        1)
00:14:14.104    226.211 -   227.142:  100.0000%  (        1)
00:14:14.104  
00:14:14.104  Complete histogram
00:14:14.104  ==================
00:14:14.104         Range in us     Cumulative     Count
00:14:14.104      8.785 -     8.844:    0.0110%  (        1)
00:14:14.104      8.902 -     8.960:    0.0219%  (        1)
00:14:14.104      8.960 -     9.018:    0.1205%  (        9)
00:14:14.104      9.018 -     9.076:    0.5807%  (       42)
00:14:14.104      9.076 -     9.135:    1.6325%  (       96)
00:14:14.104      9.135 -     9.193:    3.4075%  (      162)
00:14:14.104      9.193 -     9.251:    5.5221%  (      193)
00:14:14.104      9.251 -     9.309:    7.7134%  (      200)
00:14:14.104      9.309 -     9.367:    9.9375%  (      203)
00:14:14.104      9.367 -     9.425:   12.9506%  (      275)
00:14:14.104      9.425 -     9.484:   16.7963%  (      351)
00:14:14.104      9.484 -     9.542:   20.2147%  (      312)
00:14:14.104      9.542 -     9.600:   23.8852%  (      335)
00:14:14.104      9.600 -     9.658:   27.3036%  (      312)
00:14:14.104      9.658 -     9.716:   31.4452%  (      378)
00:14:14.104      9.716 -     9.775:   35.9921%  (      415)
00:14:14.104      9.775 -     9.833:   40.8897%  (      447)
00:14:14.104      9.833 -     9.891:   44.9545%  (      371)
00:14:14.104      9.891 -     9.949:   48.1867%  (      295)
00:14:14.104      9.949 -    10.007:   50.5204%  (      213)
00:14:14.104     10.007 -    10.065:   53.0623%  (      232)
00:14:14.104     10.065 -    10.124:   54.8811%  (      166)
00:14:14.104     10.124 -    10.182:   56.3602%  (      135)
00:14:14.104     10.182 -    10.240:   57.3902%  (       94)
00:14:14.104     10.240 -    10.298:   58.2229%  (       76)
00:14:14.104     10.298 -    10.356:   58.7816%  (       51)
00:14:14.104     10.356 -    10.415:   59.3842%  (       55)
00:14:14.104     10.415 -    10.473:   60.0526%  (       61)
00:14:14.104     10.473 -    10.531:   60.6881%  (       58)
00:14:14.104     10.531 -    10.589:   61.4002%  (       65)
00:14:14.104     10.589 -    10.647:   61.9371%  (       49)
00:14:14.104     10.647 -    10.705:   62.4192%  (       44)
00:14:14.104     10.705 -    10.764:   62.7150%  (       27)
00:14:14.104     10.764 -    10.822:   63.0875%  (       34)
00:14:14.104     10.822 -    10.880:   63.3505%  (       24)
00:14:14.104     10.880 -    10.938:   63.5587%  (       19)
00:14:14.104     10.938 -    10.996:   63.6792%  (       11)
00:14:14.104     10.996 -    11.055:   63.8216%  (       13)
00:14:14.104     11.055 -    11.113:   63.8545%  (        3)
00:14:14.104     11.113 -    11.171:   63.8874%  (        3)
00:14:14.104     11.171 -    11.229:   63.9202%  (        3)
00:14:14.104     11.229 -    11.287:   63.9860%  (        6)
00:14:14.104     11.287 -    11.345:   64.0517%  (        6)
00:14:14.104     11.345 -    11.404:   64.0846%  (        3)
00:14:14.104     11.404 -    11.462:   64.1175%  (        3)
00:14:14.104     11.462 -    11.520:   64.1284%  (        1)
00:14:14.104     11.520 -    11.578:   64.2161%  (        8)
00:14:14.104     11.578 -    11.636:   64.6324%  (       38)
00:14:14.104     11.636 -    11.695:   65.3008%  (       61)
00:14:14.104     11.695 -    11.753:   66.8347%  (      140)
00:14:14.104     11.753 -    11.811:   68.9383%  (      192)
00:14:14.104     11.811 -    11.869:   71.8089%  (      262)
00:14:14.104     11.869 -    11.927:   74.7562%  (      269)
00:14:14.104     11.927 -    11.985:   77.3967%  (      241)
00:14:14.104     11.985 -    12.044:   80.0373%  (      241)
00:14:14.104     12.044 -    12.102:   81.8670%  (      167)
00:14:14.104     12.102 -    12.160:   83.0941%  (      112)
00:14:14.104     12.160 -    12.218:   83.8611%  (       70)
00:14:14.104     12.218 -    12.276:   84.6280%  (       70)
00:14:14.104     12.276 -    12.335:   85.0444%  (       38)
00:14:14.104     12.335 -    12.393:   85.3731%  (       30)
00:14:14.104     12.393 -    12.451:   85.6251%  (       23)
00:14:14.104     12.451 -    12.509:   85.8004%  (       16)
00:14:14.104     12.509 -    12.567:   85.9428%  (       13)
00:14:14.104     12.567 -    12.625:   86.0962%  (       14)
00:14:14.104     12.625 -    12.684:   86.2167%  (       11)
00:14:14.104     12.684 -    12.742:   86.4687%  (       23)
00:14:14.104     12.742 -    12.800:   86.8412%  (       34)
00:14:14.104     12.800 -    12.858:   87.1699%  (       30)
00:14:14.104     12.858 -    12.916:   87.5644%  (       36)
00:14:14.104     12.916 -    12.975:   87.9698%  (       37)
00:14:14.104     12.975 -    13.033:   88.3423%  (       34)
00:14:14.104     13.033 -    13.091:   88.6272%  (       26)
00:14:14.104     13.091 -    13.149:   88.8682%  (       22)
00:14:14.104     13.149 -    13.207:   89.0435%  (       16)
00:14:14.104     13.207 -    13.265:   89.1750%  (       12)
00:14:14.104     13.265 -    13.324:   89.2188%  (        4)
00:14:14.104     13.324 -    13.382:   89.2845%  (        6)
00:14:14.104     13.382 -    13.440:   89.3174%  (        3)
00:14:14.104     13.440 -    13.498:   89.3722%  (        5)
00:14:14.104     13.498 -    13.556:   89.4160%  (        4)
00:14:14.104     13.556 -    13.615:   89.4708%  (        5)
00:14:14.104     13.615 -    13.673:   89.5475%  (        7)
00:14:14.104     13.673 -    13.731:   89.5585%  (        1)
00:14:14.104     13.731 -    13.789:   89.5694%  (        1)
00:14:14.104     13.789 -    13.847:   89.6132%  (        4)
00:14:14.104     13.847 -    13.905:   89.6461%  (        3)
00:14:14.104     13.905 -    13.964:   89.7118%  (        6)
00:14:14.104     13.964 -    14.022:   89.7666%  (        5)
00:14:14.104     14.022 -    14.080:   89.8433%  (        7)
00:14:14.104     14.080 -    14.138:   89.8762%  (        3)
00:14:14.104     14.138 -    14.196:   89.8871%  (        1)
00:14:14.104     14.196 -    14.255:   89.9529%  (        6)
00:14:14.104     14.255 -    14.313:   90.0296%  (        7)
00:14:14.104     14.313 -    14.371:   90.0405%  (        1)
00:14:14.104     14.371 -    14.429:   90.0515%  (        1)
00:14:14.104     14.429 -    14.487:   90.0734%  (        2)
00:14:14.104     14.487 -    14.545:   90.1172%  (        4)
00:14:14.104     14.545 -    14.604:   90.1391%  (        2)
00:14:14.104     14.604 -    14.662:   90.1611%  (        2)
00:14:14.104     14.662 -    14.720:   90.1830%  (        2)
00:14:14.104     14.720 -    14.778:   90.2268%  (        4)
00:14:14.104     14.778 -    14.836:   90.2597%  (        3)
00:14:14.104     14.836 -    14.895:   90.3145%  (        5)
00:14:14.104     14.895 -    15.011:   90.3802%  (        6)
00:14:14.104     15.011 -    15.127:   90.4240%  (        4)
00:14:14.104     15.127 -    15.244:   90.4788%  (        5)
00:14:14.104     15.244 -    15.360:   90.5774%  (        9)
00:14:14.104     15.360 -    15.476:   90.6541%  (        7)
00:14:14.104     15.476 -    15.593:   90.7308%  (        7)
00:14:14.104     15.593 -    15.709:   90.7856%  (        5)
00:14:14.104     15.709 -    15.825:   90.8294%  (        4)
00:14:14.104     15.825 -    15.942:   90.8732%  (        4)
00:14:14.104     15.942 -    16.058:   90.9280%  (        5)
00:14:14.104     16.058 -    16.175:   91.0376%  (       10)
00:14:14.104     16.175 -    16.291:   91.1362%  (        9)
00:14:14.104     16.291 -    16.407:   91.2458%  (       10)
00:14:14.104     16.407 -    16.524:   91.3005%  (        5)
00:14:14.104     16.524 -    16.640:   91.4320%  (       12)
00:14:14.104     16.640 -    16.756:   91.5416%  (       10)
00:14:14.104     16.756 -    16.873:   91.6292%  (        8)
00:14:14.104     16.873 -    16.989:   91.7059%  (        7)
00:14:14.104     16.989 -    17.105:   91.7607%  (        5)
00:14:14.104     17.105 -    17.222:   91.8484%  (        8)
00:14:14.104     17.222 -    17.338:   91.9798%  (       12)
00:14:14.104     17.338 -    17.455:   92.0894%  (       10)
00:14:14.104     17.455 -    17.571:   92.1332%  (        4)
00:14:14.104     17.571 -    17.687:   92.2099%  (        7)
00:14:14.104     17.687 -    17.804:   92.3414%  (       12)
00:14:14.104     17.804 -    17.920:   92.4071%  (        6)
00:14:14.104     17.920 -    18.036:   92.4838%  (        7)
00:14:14.104     18.036 -    18.153:   92.5386%  (        5)
00:14:14.104     18.153 -    18.269:   92.6153%  (        7)
00:14:14.104     18.269 -    18.385:   92.6591%  (        4)
00:14:14.104     18.385 -    18.502:   92.7139%  (        5)
00:14:14.104     18.618 -    18.735:   92.7249%  (        1)
00:14:14.104     18.735 -    18.851:   92.7687%  (        4)
00:14:14.104     18.851 -    18.967:   92.7797%  (        1)
00:14:14.104     18.967 -    19.084:   92.8016%  (        2)
00:14:14.104     19.084 -    19.200:   92.8125%  (        1)
00:14:14.105     19.433 -    19.549:   92.8673%  (        5)
00:14:14.105     19.665 -    19.782:   92.9002%  (        3)
00:14:14.105     19.898 -    20.015:   92.9221%  (        2)
00:14:14.105     20.015 -    20.131:   92.9440%  (        2)
00:14:14.105     20.131 -    20.247:   93.0098%  (        6)
00:14:14.105     20.247 -    20.364:   93.0645%  (        5)
00:14:14.105     20.364 -    20.480:   93.1084%  (        4)
00:14:14.105     20.480 -    20.596:   93.1741%  (        6)
00:14:14.105     20.596 -    20.713:   93.2179%  (        4)
00:14:14.105     20.713 -    20.829:   93.2618%  (        4)
00:14:14.105     20.829 -    20.945:   93.3056%  (        4)
00:14:14.105     20.945 -    21.062:   93.3494%  (        4)
00:14:14.105     21.062 -    21.178:   93.4042%  (        5)
00:14:14.105     21.178 -    21.295:   93.4480%  (        4)
00:14:14.105     21.295 -    21.411:   93.5138%  (        6)
00:14:14.105     21.411 -    21.527:   93.5685%  (        5)
00:14:14.105     21.527 -    21.644:   93.6124%  (        4)
00:14:14.105     21.644 -    21.760:   93.6671%  (        5)
00:14:14.105     21.760 -    21.876:   93.6891%  (        2)
00:14:14.105     21.876 -    21.993:   93.7329%  (        4)
00:14:14.105     21.993 -    22.109:   93.7657%  (        3)
00:14:14.105     22.109 -    22.225:   93.8205%  (        5)
00:14:14.105     22.225 -    22.342:   93.8424%  (        2)
00:14:14.105     22.342 -    22.458:   93.8753%  (        3)
00:14:14.105     22.458 -    22.575:   93.8972%  (        2)
00:14:14.105     22.575 -    22.691:   93.9082%  (        1)
00:14:14.105     22.691 -    22.807:   93.9520%  (        4)
00:14:14.105     22.807 -    22.924:   93.9739%  (        2)
00:14:14.105     22.924 -    23.040:   93.9958%  (        2)
00:14:14.105     23.040 -    23.156:   94.0068%  (        1)
00:14:14.105     23.156 -    23.273:   94.0177%  (        1)
00:14:14.105     23.389 -    23.505:   94.0287%  (        1)
00:14:14.105     23.622 -    23.738:   94.0506%  (        2)
00:14:14.105     23.738 -    23.855:   94.0616%  (        1)
00:14:14.105     23.855 -    23.971:   94.1273%  (        6)
00:14:14.105     23.971 -    24.087:   94.1602%  (        3)
00:14:14.105     24.087 -    24.204:   94.2807%  (       11)
00:14:14.105     24.204 -    24.320:   94.5327%  (       23)
00:14:14.105     24.320 -    24.436:   94.9710%  (       40)
00:14:14.105     24.436 -    24.553:   95.4859%  (       47)
00:14:14.105     24.553 -    24.669:   95.9461%  (       42)
00:14:14.105     24.669 -    24.785:   96.5049%  (       51)
00:14:14.105     24.785 -    24.902:   97.0637%  (       51)
00:14:14.105     24.902 -    25.018:   97.5786%  (       47)
00:14:14.105     25.018 -    25.135:   97.9730%  (       36)
00:14:14.105     25.135 -    25.251:   98.2689%  (       27)
00:14:14.105     25.251 -    25.367:   98.5757%  (       28)
00:14:14.105     25.367 -    25.484:   98.7290%  (       14)
00:14:14.105     25.484 -    25.600:   98.8605%  (       12)
00:14:14.105     25.600 -    25.716:   98.9263%  (        6)
00:14:14.105     25.716 -    25.833:   98.9701%  (        4)
00:14:14.105     25.833 -    25.949:   99.0468%  (        7)
00:14:14.105     25.949 -    26.065:   99.0797%  (        3)
00:14:14.105     26.065 -    26.182:   99.1344%  (        5)
00:14:14.105     26.182 -    26.298:   99.1454%  (        1)
00:14:14.105     26.298 -    26.415:   99.1673%  (        2)
00:14:14.105     26.415 -    26.531:   99.1783%  (        1)
00:14:14.105     26.531 -    26.647:   99.1892%  (        1)
00:14:14.105     26.647 -    26.764:   99.2111%  (        2)
00:14:14.105     26.764 -    26.880:   99.2221%  (        1)
00:14:14.105     26.880 -    26.996:   99.2330%  (        1)
00:14:14.105     26.996 -    27.113:   99.2769%  (        4)
00:14:14.105     27.113 -    27.229:   99.2988%  (        2)
00:14:14.105     27.229 -    27.345:   99.3097%  (        1)
00:14:14.105     27.345 -    27.462:   99.3207%  (        1)
00:14:14.105     27.462 -    27.578:   99.3317%  (        1)
00:14:14.105     27.927 -    28.044:   99.3426%  (        1)
00:14:14.105     28.509 -    28.625:   99.3536%  (        1)
00:14:14.105     28.742 -    28.858:   99.3645%  (        1)
00:14:14.105     29.673 -    29.789:   99.3755%  (        1)
00:14:14.105     30.255 -    30.487:   99.3864%  (        1)
00:14:14.105     30.487 -    30.720:   99.3974%  (        1)
00:14:14.105     30.720 -    30.953:   99.4083%  (        1)
00:14:14.105     30.953 -    31.185:   99.4522%  (        4)
00:14:14.105     31.185 -    31.418:   99.4850%  (        3)
00:14:14.105     31.418 -    31.651:   99.5070%  (        2)
00:14:14.105     31.651 -    31.884:   99.5837%  (        7)
00:14:14.105     31.884 -    32.116:   99.6275%  (        4)
00:14:14.105     32.116 -    32.349:   99.6494%  (        2)
00:14:14.105     32.349 -    32.582:   99.6932%  (        4)
00:14:14.105     32.582 -    32.815:   99.7042%  (        1)
00:14:14.105     32.815 -    33.047:   99.7261%  (        2)
00:14:14.105     33.047 -    33.280:   99.7699%  (        4)
00:14:14.105     33.280 -    33.513:   99.8028%  (        3)
00:14:14.105     33.513 -    33.745:   99.8357%  (        3)
00:14:14.105     34.444 -    34.676:   99.8466%  (        1)
00:14:14.105     34.676 -    34.909:   99.8576%  (        1)
00:14:14.105     35.142 -    35.375:   99.8685%  (        1)
00:14:14.105     36.771 -    37.004:   99.8795%  (        1)
00:14:14.105     39.796 -    40.029:   99.9014%  (        2)
00:14:14.105     40.495 -    40.727:   99.9123%  (        1)
00:14:14.105     40.960 -    41.193:   99.9233%  (        1)
00:14:14.105     41.193 -    41.425:   99.9343%  (        1)
00:14:14.105     49.105 -    49.338:   99.9452%  (        1)
00:14:14.105     49.571 -    49.804:   99.9562%  (        1)
00:14:14.105     51.200 -    51.433:   99.9671%  (        1)
00:14:14.105     86.109 -    86.575:   99.9781%  (        1)
00:14:14.105     94.487 -    94.953:   99.9890%  (        1)
00:14:14.105    105.193 -   105.658:  100.0000%  (        1)
00:14:14.105  
00:14:14.105  
00:14:14.105  real	0m1.274s
00:14:14.105  user	0m1.095s
00:14:14.105  sys	0m0.136s
00:14:14.105   05:59:34 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:14.105   05:59:34 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:14:14.105  ************************************
00:14:14.105  END TEST nvme_overhead
00:14:14.105  ************************************
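Both histograms above report a cumulative percentage per bucket, so the median latency can be read off as the first bucket whose cumulative column reaches 50%. A small awk sketch over a file holding just one histogram section (submit_histogram.txt is a hypothetical extract of the Submit table above):

  # Print the first bucket whose cumulative percentage is >= 50%.
  awk '{ for (i = 1; i <= NF; i++)
           if ($i ~ /%$/) { p = $i; sub(/%/, "", p)
             if (p + 0 >= 50) { print "median bucket:", $0; exit } } }' submit_histogram.txt

Against the Submit table above, that lands in the 15.244 - 15.360 us bucket (52.4159% cumulative).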
00:14:14.105   05:59:34 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:14:14.105   05:59:34 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:14:14.105   05:59:34 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:14.105   05:59:34 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:14.105  ************************************
00:14:14.105  START TEST nvme_arbitration
00:14:14.105  ************************************
00:14:14.105   05:59:34 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:14:17.389  Initializing NVMe Controllers
00:14:17.389  Attached to 0000:00:10.0
00:14:17.389  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:14:17.389  Associating QEMU NVMe Ctrl       (12340               ) with lcore 1
00:14:17.389  Associating QEMU NVMe Ctrl       (12340               ) with lcore 2
00:14:17.389  Associating QEMU NVMe Ctrl       (12340               ) with lcore 3
00:14:17.389  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:14:17.389  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:14:17.389  Initialization complete. Launching workers.
00:14:17.389  Starting thread on core 1 with urgent priority queue
00:14:17.389  Starting thread on core 2 with urgent priority queue
00:14:17.389  Starting thread on core 3 with urgent priority queue
00:14:17.389  Starting thread on core 0 with urgent priority queue
00:14:17.389  QEMU NVMe Ctrl       (12340               ) core 0: 10042.00 IO/s     9.96 secs/100000 ios
00:14:17.389  QEMU NVMe Ctrl       (12340               ) core 1: 10107.33 IO/s     9.89 secs/100000 ios
00:14:17.389  QEMU NVMe Ctrl       (12340               ) core 2:  4482.33 IO/s    22.31 secs/100000 ios
00:14:17.389  QEMU NVMe Ctrl       (12340               ) core 3:  4672.00 IO/s    21.40 secs/100000 ios
00:14:17.389  ========================================================
00:14:17.389  
00:14:17.389  
00:14:17.389  real	0m3.282s
00:14:17.389  user	0m9.065s
00:14:17.389  sys	0m0.168s
00:14:17.389   05:59:38 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:17.389   05:59:38 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:14:17.389  ************************************
00:14:17.389  END TEST nvme_arbitration
00:14:17.389  ************************************
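The arbitration run pins four reactors with -c 0xf and reports per-core throughput; the mask is a plain hex bitmap of lcore indices. A pure-bash sketch for decoding such a mask (nothing SPDK-specific):

  # Decode an SPDK-style hex core mask into lcore indices.
  mask=0xf
  for ((i = 0; i < 32; i++)); do
      (( (mask >> i) & 1 )) && echo "lcore $i selected"
  done

With mask=0xf this prints lcores 0 through 3, matching the four worker threads above.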
00:14:17.389   05:59:38 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:14:17.389   05:59:38 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:14:17.389   05:59:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:17.389   05:59:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:17.389  ************************************
00:14:17.390  START TEST nvme_single_aen
00:14:17.390  ************************************
00:14:17.390   05:59:38 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:14:17.649  Asynchronous Event Request test
00:14:17.649  Attached to 0000:00:10.0
00:14:17.649  Reset controller to setup AER completions for this process
00:14:17.649  Registering asynchronous event callbacks...
00:14:17.649  Getting orig temperature thresholds of all controllers
00:14:17.649  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:14:17.649  Setting all controllers temperature threshold low to trigger AER
00:14:17.649  Waiting for all controllers temperature threshold to be set lower
00:14:17.649  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:14:17.649  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:14:17.649  Waiting for all controllers to trigger AER and reset threshold
00:14:17.649  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:14:17.649  Cleaning up...
00:14:17.649  
00:14:17.649  real	0m0.284s
00:14:17.649  user	0m0.108s
00:14:17.649  sys	0m0.134s
00:14:17.649   05:59:38 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:17.649   05:59:38 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:14:17.649  ************************************
00:14:17.649  END TEST nvme_single_aen
00:14:17.649  ************************************
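The AER test lowers the temperature threshold beneath the current reading (323 Kelvin here, against a 343 Kelvin default) so the controller raises an asynchronous event. The same feature pair can be exercised with nvme-cli; feature ID 0x04 is the spec-defined Temperature Threshold, while the device node and the 300 Kelvin value below are placeholders:

  # Read, then lower, the composite temperature threshold (feature 0x04).
  sudo nvme get-feature /dev/nvme0 -f 0x04
  sudo nvme set-feature /dev/nvme0 -f 0x04 -v 300   # threshold in Kelvin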
00:14:17.649   05:59:38 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:14:17.649   05:59:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:17.649   05:59:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:17.649   05:59:38 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:17.649  ************************************
00:14:17.649  START TEST nvme_doorbell_aers
00:14:17.649  ************************************
00:14:17.649   05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:14:17.649   05:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:14:17.649   05:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:14:17.649   05:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:14:17.649    05:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:14:17.649    05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:14:17.649    05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:14:17.649    05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:17.649     05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:17.649     05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:14:17.649    05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:14:17.649    05:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:14:17.649   05:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:14:17.649   05:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:14:18.216  [2024-11-18 05:59:38.923682] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88404) is not found. Dropping the request.
00:14:28.230  Executing: test_write_invalid_db
00:14:28.230  Waiting for AER completion...
00:14:28.230  Failure: test_write_invalid_db
00:14:28.230  
00:14:28.230  Executing: test_invalid_db_write_overflow_sq
00:14:28.230  Waiting for AER completion...
00:14:28.230  Failure: test_invalid_db_write_overflow_sq
00:14:28.230  
00:14:28.230  Executing: test_invalid_db_write_overflow_cq
00:14:28.230  Waiting for AER completion...
00:14:28.230  Failure: test_invalid_db_write_overflow_cq
00:14:28.230  
00:14:28.230  
00:14:28.230  real	0m10.097s
00:14:28.230  user	0m8.664s
00:14:28.230  sys	0m1.360s
00:14:28.230   05:59:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:28.230   05:59:48 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:14:28.230  ************************************
00:14:28.230  END TEST nvme_doorbell_aers
00:14:28.230  ************************************
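The xtrace above also documents how the suite enumerates controllers: gen_nvme.sh emits a JSON config and jq extracts each PCI traddr. The same pipeline runs standalone from a checkout (path as in this log):

  # List NVMe BDFs the way nvme_doorbell_aers resolves them.
  rootdir=/home/vagrant/spdk_repo/spdk
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"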
00:14:28.230    05:59:48 nvme -- nvme/nvme.sh@97 -- # uname
00:14:28.230   05:59:48 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:14:28.230   05:59:48 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:14:28.230   05:59:48 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:14:28.230   05:59:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:28.230   05:59:48 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:28.230  ************************************
00:14:28.230  START TEST nvme_multi_aen
00:14:28.230  ************************************
00:14:28.230   05:59:48 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:14:28.230  [2024-11-18 05:59:48.954611] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88404) is not found. Dropping the request.
00:14:28.230  [2024-11-18 05:59:48.954741] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88404) is not found. Dropping the request.
00:14:28.230  [2024-11-18 05:59:48.954769] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88404) is not found. Dropping the request.
00:14:28.230  Child process pid: 88586
00:14:28.230  [Child] Asynchronous Event Request test
00:14:28.230  [Child] Attached to 0000:00:10.0
00:14:28.230  [Child] Registering asynchronous event callbacks...
00:14:28.230  [Child] Getting orig temperature thresholds of all controllers
00:14:28.230  [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:14:28.230  [Child] Waiting for all controllers to trigger AER and reset threshold
00:14:28.230  [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:14:28.230  [Child] 0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:14:28.230  [Child] Cleaning up...
00:14:28.488  Asynchronous Event Request test
00:14:28.488  Attached to 0000:00:10.0
00:14:28.488  Reset controller to setup AER completions for this process
00:14:28.488  Registering asynchronous event callbacks...
00:14:28.488  Getting orig temperature thresholds of all controllers
00:14:28.488  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:14:28.488  Setting all controllers temperature threshold low to trigger AER
00:14:28.488  Waiting for all controllers temperature threshold to be set lower
00:14:28.488  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:14:28.488  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:14:28.488  Waiting for all controllers to trigger AER and reset threshold
00:14:28.488  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:14:28.488  Cleaning up...
00:14:28.488  
00:14:28.488  real	0m0.507s
00:14:28.488  user	0m0.175s
00:14:28.488  sys	0m0.217s
00:14:28.488   05:59:49 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:28.488   05:59:49 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:14:28.488  ************************************
00:14:28.488  END TEST nvme_multi_aen
00:14:28.488  ************************************
00:14:28.488   05:59:49 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:14:28.488   05:59:49 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:28.488   05:59:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:28.488   05:59:49 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:28.488  ************************************
00:14:28.488  START TEST nvme_startup
00:14:28.488  ************************************
00:14:28.488   05:59:49 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:14:28.747  Initializing NVMe Controllers
00:14:28.747  Attached to 0000:00:10.0
00:14:28.747  Initialization complete.
00:14:28.747  Time used: 171396.812 (us)

00:14:28.747  
00:14:28.747  real	0m0.235s
00:14:28.747  user	0m0.084s
00:14:28.747  sys	0m0.105s
00:14:28.747   05:59:49 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:28.747   05:59:49 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:14:28.747  ************************************
00:14:28.747  END TEST nvme_startup
00:14:28.747  ************************************
00:14:28.747   05:59:49 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:14:28.747   05:59:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:28.747   05:59:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:28.747   05:59:49 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:28.747  ************************************
00:14:28.747  START TEST nvme_multi_secondary
00:14:28.747  ************************************
00:14:28.747   05:59:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:14:28.747   05:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=88632
00:14:28.747   05:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:14:28.747   05:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=88633
00:14:28.747   05:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:14:28.747   05:59:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:14:32.035  Initializing NVMe Controllers
00:14:32.035  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:32.035  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:14:32.035  Initialization complete. Launching workers.
00:14:32.035  ========================================================
00:14:32.035                                                                             Latency(us)
00:14:32.035  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:32.035  PCIE (0000:00:10.0) NSID 1 from core  2:   13979.16      54.61    1144.04     169.75   10583.06
00:14:32.035  ========================================================
00:14:32.035  Total                                  :   13979.16      54.61    1144.04     169.75   10583.06
00:14:32.035  
00:14:32.035   05:59:52 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 88632
00:14:32.294  Initializing NVMe Controllers
00:14:32.294  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:32.294  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:14:32.294  Initialization complete. Launching workers.
00:14:32.294  ========================================================
00:14:32.294                                                                             Latency(us)
00:14:32.294  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:32.294  PCIE (0000:00:10.0) NSID 1 from core  1:   31166.42     121.74     512.94     161.36    1641.79
00:14:32.294  ========================================================
00:14:32.294  Total                                  :   31166.42     121.74     512.94     161.36    1641.79
00:14:32.294  
00:14:34.197  Initializing NVMe Controllers
00:14:34.197  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:34.197  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:34.197  Initialization complete. Launching workers.
00:14:34.197  ========================================================
00:14:34.197                                                                             Latency(us)
00:14:34.197  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:34.197  PCIE (0000:00:10.0) NSID 1 from core  0:   39701.73     155.08     402.60     148.64    1675.54
00:14:34.197  ========================================================
00:14:34.197  Total                                  :   39701.73     155.08     402.60     148.64    1675.54
00:14:34.197  
00:14:34.197   05:59:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 88633
00:14:34.197   05:59:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=88708
00:14:34.197   05:59:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:14:34.197   05:59:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=88709
00:14:34.197   05:59:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:14:34.197   05:59:54 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:14:37.476  Initializing NVMe Controllers
00:14:37.476  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:37.476  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:14:37.476  Initialization complete. Launching workers.
00:14:37.476  ========================================================
00:14:37.476                                                                             Latency(us)
00:14:37.476  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:37.476  PCIE (0000:00:10.0) NSID 1 from core  1:   33297.38     130.07     480.06     156.29    1588.16
00:14:37.476  ========================================================
00:14:37.476  Total                                  :   33297.38     130.07     480.06     156.29    1588.16
00:14:37.476  
00:14:37.734  Initializing NVMe Controllers
00:14:37.734  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:37.734  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:37.734  Initialization complete. Launching workers.
00:14:37.734  ========================================================
00:14:37.734                                                                             Latency(us)
00:14:37.734  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:37.734  PCIE (0000:00:10.0) NSID 1 from core  0:   33625.61     131.35     475.38     149.47    1437.25
00:14:37.734  ========================================================
00:14:37.734  Total                                  :   33625.61     131.35     475.38     149.47    1437.25
00:14:37.734  
00:14:39.635  Initializing NVMe Controllers
00:14:39.635  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:39.635  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:14:39.635  Initialization complete. Launching workers.
00:14:39.635  ========================================================
00:14:39.635                                                                             Latency(us)
00:14:39.635  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:39.635  PCIE (0000:00:10.0) NSID 1 from core  2:   17218.20      67.26     928.40     164.75    8461.65
00:14:39.635  ========================================================
00:14:39.635  Total                                  :   17218.20      67.26     928.40     164.75    8461.65
00:14:39.635  
00:14:39.635   06:00:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 88708
00:14:39.635   06:00:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 88709
00:14:39.635  
00:14:39.635  real	0m10.732s
00:14:39.635  user	0m18.541s
00:14:39.635  sys	0m0.847s
00:14:39.635   06:00:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:39.635   06:00:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:14:39.635  ************************************
00:14:39.635  END TEST nvme_multi_secondary
00:14:39.635  ************************************
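[Editor's sketch] nvme_multi_secondary runs three spdk_nvme_perf instances against the same controller: -i 0 puts them all in DPDK shared-memory group 0, so whichever starts first becomes the primary process and the other two attach as secondaries, while the core masks keep them on distinct lcores. Note from the tables above that per-instance latency varies widely even for identical read workloads, since all three share one controller. A condensed sketch of the launch pattern, with every flag copied from the trace:

    # Sketch: one primary + two secondary perf processes sharing shm id 0.
    PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
    $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # lands on lcore 0
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # lcore 1
    $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # lcore 2
    wait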
00:14:39.635   06:00:00 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:14:39.636   06:00:00 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/88045 ]]
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@1094 -- # kill 88045
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@1095 -- # wait 88045
00:14:39.636  [2024-11-18 06:00:00.332476] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88585) is not found. Dropping the request.
00:14:39.636  [2024-11-18 06:00:00.332572] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88585) is not found. Dropping the request.
00:14:39.636  [2024-11-18 06:00:00.332606] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88585) is not found. Dropping the request.
00:14:39.636  [2024-11-18 06:00:00.332631] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 88585) is not found. Dropping the request.
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:14:39.636   06:00:00 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:39.636   06:00:00 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:39.636  ************************************
00:14:39.636  START TEST bdev_nvme_reset_stuck_adm_cmd
00:14:39.636  ************************************
00:14:39.636   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:14:39.636  * Looking for test storage...
00:14:39.636  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:39.636  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:39.636  		--rc genhtml_branch_coverage=1
00:14:39.636  		--rc genhtml_function_coverage=1
00:14:39.636  		--rc genhtml_legend=1
00:14:39.636  		--rc geninfo_all_blocks=1
00:14:39.636  		--rc geninfo_unexecuted_blocks=1
00:14:39.636  		
00:14:39.636  		'
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:39.636  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:39.636  		--rc genhtml_branch_coverage=1
00:14:39.636  		--rc genhtml_function_coverage=1
00:14:39.636  		--rc genhtml_legend=1
00:14:39.636  		--rc geninfo_all_blocks=1
00:14:39.636  		--rc geninfo_unexecuted_blocks=1
00:14:39.636  		
00:14:39.636  		'
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:39.636  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:39.636  		--rc genhtml_branch_coverage=1
00:14:39.636  		--rc genhtml_function_coverage=1
00:14:39.636  		--rc genhtml_legend=1
00:14:39.636  		--rc geninfo_all_blocks=1
00:14:39.636  		--rc geninfo_unexecuted_blocks=1
00:14:39.636  		
00:14:39.636  		'
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:39.636  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:39.636  		--rc genhtml_branch_coverage=1
00:14:39.636  		--rc genhtml_function_coverage=1
00:14:39.636  		--rc genhtml_legend=1
00:14:39.636  		--rc geninfo_all_blocks=1
00:14:39.636  		--rc geninfo_unexecuted_blocks=1
00:14:39.636  		
00:14:39.636  		'
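[Editor's sketch] All of the scripts/common.sh trace above is a single call, lt 1.15 2, deciding whether the installed lcov is older than 2.0 so the right --rc option spelling gets exported: both version strings are split on '.', '-' and ':' and compared numerically field by field, with missing fields treated as 0. Condensed into one function (a sketch of the traced logic, not the script's source; like the original's decimal guard, it assumes purely numeric fields):

    # Sketch of the cmp_versions pattern traced above.
    ver_lt() {
        local IFS=.-: i
        local -a v1 v2
        read -ra v1 <<< "$1"; read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal is not less-than
    }
    ver_lt 1.15 2 && echo "lcov < 2"    # true: 1 < 2 in the first field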
00:14:39.636   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:14:39.636   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:14:39.636   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:14:39.636   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:14:39.636   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:14:39.636    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:14:39.636     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:14:39.895     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:14:39.895     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:14:39.895     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:39.895      06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:39.895      06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:14:39.895     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:14:39.895     06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:14:39.895    06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
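[Editor's sketch] get_first_nvme_bdf above boils down to one pipeline: render the generated bdev config with gen_nvme.sh, let jq pull out every controller's PCI address, and take element 0. As a standalone sketch (paths and jq filter verbatim from the trace):

    # Sketch of the get_nvme_bdfs pattern traced above.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'No NVMe controllers found' >&2; exit 1; }
    echo "${bdfs[0]}"    # 0000:00:10.0 on this runner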
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=88852
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 88852
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 88852 ']'
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:39.895  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:39.895   06:00:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:14:39.895  [2024-11-18 06:00:00.734989] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:39.895  [2024-11-18 06:00:00.735223] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88852 ]
00:14:40.154  [2024-11-18 06:00:00.919268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:14:40.154  [2024-11-18 06:00:00.949613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:40.154  [2024-11-18 06:00:00.949777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:40.155  [2024-11-18 06:00:00.949800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:40.155  [2024-11-18 06:00:00.949838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:40.721   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:40.721   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:14:40.721   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:14:40.721   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.721   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:14:40.980  nvme0n1
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.980    06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_PAOw7.txt
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:14:40.980  true
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:40.980    06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731909601
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=88875
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:14:40.980   06:00:01 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:14:42.880  [2024-11-18 06:00:03.780650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:14:42.880  [2024-11-18 06:00:03.781105] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:14:42.880  [2024-11-18 06:00:03.781152] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:14:42.880  [2024-11-18 06:00:03.781200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:42.880  [2024-11-18 06:00:03.783587] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.880  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 88875
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 88875
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 88875
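[Editor's sketch] The sequence above is the heart of the test: bdev_nvme_add_error_injection arms a one-shot failure for admin opcode 10 (Get Features) with --do_not_submit, so the Get Features (Number of Queues) command sent via bdev_nvme_send_cmd simply hangs; the controller reset then has to flush it, and the log confirms it was "completed manually" with the injected status before the reset succeeded. The opaque -c argument is nothing more than a base64-encoded 64-byte admin submission-queue entry, opcode in byte 0 and CDW10 in bytes 40-43 per the NVMe SQE layout, so it can be rebuilt by hand (a sketch, not the test's own tooling):

    # Sketch: reconstruct the -c blob sent above (opc 0x0a, cdw10 = 7).
    { printf '\x0a'; head -c 39 /dev/zero; printf '\x07'; head -c 23 /dev/zero; } | base64 -w0
    # output matches the -c string passed to bdev_nvme_send_cmd above

The printed completion ("GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007") is the controller decoding exactly those two fields.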
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_PAOw7.txt
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:14:42.880     06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:14:42.880     06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:14:42.880      06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:14:42.880   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:14:42.880    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:14:42.880     06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:14:42.880     06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:14:42.880      06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:14:43.139    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:14:43.139    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_PAOw7.txt
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 88852
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 88852 ']'
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 88852
00:14:43.139    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:43.139    06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88852
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:43.139  killing process with pid 88852
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88852'
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 88852
00:14:43.139   06:00:03 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 88852
00:14:43.397   06:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:14:43.398   06:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:14:43.398  
00:14:43.398  real	0m3.780s
00:14:43.398  user	0m13.450s
00:14:43.398  sys	0m0.597s
00:14:43.398   06:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:43.398   06:00:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:14:43.398  ************************************
00:14:43.398  END TEST bdev_nvme_reset_stuck_adm_cmd
00:14:43.398  ************************************
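[Editor's sketch] The two base64_decode_bits calls above pick the status fields back out of the completion that bdev_nvme_send_cmd saved to the temp file: the blob is the raw 16-byte CQE, and the status word is its last two bytes (bit 0 of that word is the phase bit, bits 1-8 the status code, bits 9-11 the status code type), which is why the helper is called with shift/mask pairs of (1, 255) and (9, 3). A compact re-implementation of what the trace shows (the signature is inferred from the trace, so treat it as a sketch):

    # Sketch: extract a bit field from the status word of a base64 CQE.
    decode_status_bits() {    # args: <base64 cqe> <shift> <mask>
        local -a bytes
        bytes=($(base64 -d <<< "$1" | hexdump -ve '/1 "0x%02x\n"'))
        local status=$(( (bytes[15] << 8) | bytes[14] ))    # little-endian word
        printf '0x%x\n' $(( (status >> $2) & $3 ))
    }
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 1 255    # SC  -> 0x1, the injected sc
    decode_status_bits AAAAAAAAAAAAAAAAAAACAA== 9 3      # SCT -> 0x0, the injected sct

Both match the values armed by bdev_nvme_add_error_injection, and diff_time=2 is under the 5 second test_timeout, so the (( ... )) guards above pass.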
00:14:43.398   06:00:04 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:14:43.398   06:00:04 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:14:43.398   06:00:04 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:43.398   06:00:04 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:43.398   06:00:04 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:43.398  ************************************
00:14:43.398  START TEST nvme_fio
00:14:43.398  ************************************
00:14:43.398   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:14:43.398   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:14:43.398   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:14:43.398    06:00:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:14:43.398    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:14:43.398    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:14:43.398    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:43.398     06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:43.398     06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:14:43.398    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:14:43.398    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:14:43.398   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0')
00:14:43.398   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:14:43.398   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:14:43.398   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:14:43.398   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:14:43.656   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:14:43.656   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:14:43.945   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:14:43.945   06:00:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:14:43.945    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:14:43.945    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:14:43.945    06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.8
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.8 ]]
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:14:43.945   06:00:04 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:14:44.204  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:14:44.204  fio-3.35
00:14:44.204  Starting 1 thread
00:14:47.495  
00:14:47.495  test: (groupid=0, jobs=1): err= 0: pid=89003: Mon Nov 18 06:00:07 2024
00:14:47.495    read: IOPS=16.0k, BW=62.3MiB/s (65.4MB/s)(125MiB/2001msec)
00:14:47.495      slat (nsec): min=3891, max=71029, avg=5979.21, stdev=2221.38
00:14:47.495      clat (usec): min=350, max=10277, avg=4012.83, stdev=509.70
00:14:47.495       lat (usec): min=356, max=10285, avg=4018.81, stdev=510.34
00:14:47.495      clat percentiles (usec):
00:14:47.495       |  1.00th=[ 3294],  5.00th=[ 3556], 10.00th=[ 3621], 20.00th=[ 3720],
00:14:47.495       | 30.00th=[ 3785], 40.00th=[ 3851], 50.00th=[ 3916], 60.00th=[ 3982],
00:14:47.495       | 70.00th=[ 4080], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4621],
00:14:47.495       | 99.00th=[ 6652], 99.50th=[ 7242], 99.90th=[ 8094], 99.95th=[ 8979],
00:14:47.495       | 99.99th=[ 9896]
00:14:47.495     bw (  KiB/s): min=63440, max=68384, per=100.00%, avg=65776.00, stdev=2483.20, samples=3
00:14:47.495     iops        : min=15860, max=17096, avg=16444.00, stdev=620.80, samples=3
00:14:47.495    write: IOPS=16.0k, BW=62.5MiB/s (65.5MB/s)(125MiB/2001msec); 0 zone resets
00:14:47.495      slat (nsec): min=3964, max=48195, avg=6080.37, stdev=2229.46
00:14:47.495      clat (usec): min=339, max=10960, avg=3976.95, stdev=572.24
00:14:47.495       lat (usec): min=345, max=10969, avg=3983.03, stdev=572.90
00:14:47.495      clat percentiles (usec):
00:14:47.495       |  1.00th=[ 3228],  5.00th=[ 3523], 10.00th=[ 3589], 20.00th=[ 3687],
00:14:47.495       | 30.00th=[ 3752], 40.00th=[ 3785], 50.00th=[ 3851], 60.00th=[ 3916],
00:14:47.495       | 70.00th=[ 4047], 80.00th=[ 4228], 90.00th=[ 4424], 95.00th=[ 4555],
00:14:47.495       | 99.00th=[ 6783], 99.50th=[ 7242], 99.90th=[10683], 99.95th=[10814],
00:14:47.495       | 99.99th=[10945]
00:14:47.495     bw (  KiB/s): min=63744, max=68200, per=100.00%, avg=65613.33, stdev=2312.99, samples=3
00:14:47.495     iops        : min=15936, max=17050, avg=16403.33, stdev=578.25, samples=3
00:14:47.495    lat (usec)   : 500=0.01%, 750=0.01%, 1000=0.01%
00:14:47.495    lat (msec)   : 2=0.12%, 4=64.70%, 10=35.07%, 20=0.08%
00:14:47.495    cpu          : usr=99.75%, sys=0.25%, ctx=6, majf=0, minf=625
00:14:47.495    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:14:47.495       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:14:47.495       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:14:47.495       issued rwts: total=31930,31995,0,0 short=0,0,0,0 dropped=0,0,0,0
00:14:47.495       latency   : target=0, window=0, percentile=100.00%, depth=128
00:14:47.495  
00:14:47.495  Run status group 0 (all jobs):
00:14:47.495     READ: bw=62.3MiB/s (65.4MB/s), 62.3MiB/s-62.3MiB/s (65.4MB/s-65.4MB/s), io=125MiB (131MB), run=2001-2001msec
00:14:47.495    WRITE: bw=62.5MiB/s (65.5MB/s), 62.5MiB/s-62.5MiB/s (65.5MB/s-65.5MB/s), io=125MiB (131MB), run=2001-2001msec
00:14:47.495  -----------------------------------------------------
00:14:47.495  Suppressions used:
00:14:47.495    count      bytes template
00:14:47.495        1         32 /usr/src/fio/parse.c
00:14:47.495  -----------------------------------------------------
00:14:47.495  
00:14:47.495   06:00:08 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:14:47.495   06:00:08 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:14:47.495  
00:14:47.495  real	0m3.901s
00:14:47.495  user	0m3.202s
00:14:47.495  sys	0m0.341s
00:14:47.495   06:00:08 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:47.495  ************************************
00:14:47.495  END TEST nvme_fio
00:14:47.495   06:00:08 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:14:47.495  ************************************
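[Editor's sketch] nvme_fio drives the same device through SPDK's fio plugin instead of the kernel block layer: the job file's ioengine is spdk, the target is named by transport and address inside --filename (with the ':' characters of the BDF swapped for '.'), --bs=4096 was picked above because the identify grep found no extended-LBA metadata, and the plugin is stacked with ASan via LD_PRELOAD exactly as the trace shows. A standalone rerun, arguments verbatim from the trace:

    # Sketch: run fio against 0000:00:10.0 through the SPDK plugin.
    LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096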
00:14:47.495  
00:14:47.495  real	0m42.134s
00:14:47.495  user	1m55.037s
00:14:47.495  sys	0m7.560s
00:14:47.495   06:00:08 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:47.495  ************************************
00:14:47.495  END TEST nvme
00:14:47.495   06:00:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:47.495  ************************************
00:14:47.495   06:00:08  -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:14:47.495   06:00:08  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:14:47.495   06:00:08  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:47.495   06:00:08  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:47.495   06:00:08  -- common/autotest_common.sh@10 -- # set +x
00:14:47.495  ************************************
00:14:47.495  START TEST nvme_scc
00:14:47.495  ************************************
00:14:47.495   06:00:08 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:14:47.495  * Looking for test storage...
00:14:47.496  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:14:47.496     06:00:08 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:47.496      06:00:08 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version
00:14:47.496      06:00:08 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:47.496     06:00:08 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@345 -- # : 1
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:47.496     06:00:08 nvme_scc -- scripts/common.sh@368 -- # return 0
00:14:47.496     06:00:08 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:47.496     06:00:08 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:47.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.496  		--rc genhtml_branch_coverage=1
00:14:47.496  		--rc genhtml_function_coverage=1
00:14:47.496  		--rc genhtml_legend=1
00:14:47.496  		--rc geninfo_all_blocks=1
00:14:47.496  		--rc geninfo_unexecuted_blocks=1
00:14:47.496  		
00:14:47.496  		'
00:14:47.496     06:00:08 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:47.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.496  		--rc genhtml_branch_coverage=1
00:14:47.496  		--rc genhtml_function_coverage=1
00:14:47.496  		--rc genhtml_legend=1
00:14:47.496  		--rc geninfo_all_blocks=1
00:14:47.496  		--rc geninfo_unexecuted_blocks=1
00:14:47.496  		
00:14:47.496  		'
00:14:47.496     06:00:08 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:47.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.496  		--rc genhtml_branch_coverage=1
00:14:47.496  		--rc genhtml_function_coverage=1
00:14:47.496  		--rc genhtml_legend=1
00:14:47.496  		--rc geninfo_all_blocks=1
00:14:47.496  		--rc geninfo_unexecuted_blocks=1
00:14:47.496  		
00:14:47.496  		'
00:14:47.496     06:00:08 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:47.496  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:47.496  		--rc genhtml_branch_coverage=1
00:14:47.496  		--rc genhtml_function_coverage=1
00:14:47.496  		--rc genhtml_legend=1
00:14:47.496  		--rc geninfo_all_blocks=1
00:14:47.496  		--rc geninfo_unexecuted_blocks=1
00:14:47.496  		
00:14:47.496  		'
00:14:47.496    06:00:08 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:14:47.496       06:00:08 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:14:47.496      06:00:08 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:47.496      06:00:08 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:47.496       06:00:08 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:47.496       06:00:08 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:47.496       06:00:08 nvme_scc -- paths/export.sh@4 -- # PATH=/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:47.496       06:00:08 nvme_scc -- paths/export.sh@5 -- # PATH=/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:47.496       06:00:08 nvme_scc -- paths/export.sh@6 -- # export PATH
00:14:47.496       06:00:08 nvme_scc -- paths/export.sh@7 -- # echo /opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/var/spdk/dependencies/pip/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:14:47.496     06:00:08 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:14:47.496    06:00:08 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:47.755    06:00:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:14:47.755   06:00:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:14:47.755   06:00:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:14:47.755   06:00:08 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:48.015  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:48.015  Waiting for block devices as requested
00:14:48.015  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
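[Editor's sketch] Everything from scan_nvme_ctrls onward is one controller being inventoried: nvme_get runs id-ctrl once against /dev/nvme0 and reads each "field : value" line of its output into an associative array, which is why every identify field (vid, ssvid, sn, mn, fr, mdts, ...) appears below as its own eval step. The hundreds of traced lines that follow collapse to roughly this loop (a sketch of the pattern, not the functions.sh source):

    # Sketch of the nvme_get parsing loop traced below.
    declare -A nvme0
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue        # skip headers and blank lines
        nvme0[${reg//[[:space:]]/}]=${val# }        # e.g. nvme0[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)
    echo "${nvme0[vid]} ${nvme0[mn]}"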
00:14:48.015   06:00:08 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:14:48.015   06:00:08 nvme_scc -- scripts/common.sh@18 -- # local i
00:14:48.015   06:00:08 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:14:48.015   06:00:08 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:14:48.015   06:00:08 nvme_scc -- scripts/common.sh@27 -- # return 0
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@18 -- # shift
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.015    06:00:08 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.015   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340               "'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340               '
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:14:48.016    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.016   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:14:48.017    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.017   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:14:48.279    06:00:08 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:14:48.279   06:00:08 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.279   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:14:48.279    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@18 -- # shift
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.280   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:14:48.280    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:14:48.281    06:00:09 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
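Among the namespace registers captured above, flbas and the lbaf0..7 descriptors determine the active block size: flbas bits 0-3 select the format (0x4 -> lbaf4 here, the entry tagged "(in use)"), and that format's lbads field is log2 of the data block size. A hedged sketch of the decode, reading from the arrays this script built; the variable names are mine:

    # Sketch: derive the in-use block size from flbas + lbafN.
    fmt=$(( ${nvme0n1[flbas]} & 0xf ))           # 0x4 & 0xf -> 4
    lbaf=${nvme0n1[lbaf$fmt]}                    # 'ms:0   lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}    # -> 12
    echo "block size: $(( 1 << lbads )) bytes"   # 4096, matching the test output below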
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:14:48.281   06:00:09 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:14:48.282   06:00:09 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 ))
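At this point the scan has registered the controller in three associative arrays keyed by device name: ctrls (device -> register-array name), nvmes (device -> namespace-map name, here nvme0_ns with 1 -> nvme0n1), and bdfs (device -> PCI address). A nameref is how the per-controller namespace map is reached indirectly; a small sketch, assuming only the mappings shown in the trace:

    # Sketch: the lookup tables built by the scan, and an indirect read.
    declare -A ctrls=([nvme0]=nvme0) nvmes=([nvme0]=nvme0_ns) bdfs=([nvme0]=0000:00:10.0)
    declare -A nvme0_ns=([1]=nvme0n1)
    declare -n _ns_map=${nvmes[nvme0]}       # resolves to the nvme0_ns array
    echo "nvme0 @ ${bdfs[nvme0]}, ns1 -> ${_ns_map[1]}"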
00:14:48.282    06:00:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:14:48.282    06:00:09 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:14:48.282    06:00:09 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@192 -- # (( 1 == 0 ))
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:14:48.282      06:00:09 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:14:48.282     06:00:09 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:14:48.282    06:00:09 nvme_scc -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:14:48.282    06:00:09 nvme_scc -- nvme/functions.sh@208 -- # echo nvme0
00:14:48.282    06:00:09 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:14:48.282   06:00:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
00:14:48.282   06:00:09 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
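get_ctrl_with_feature resolved to nvme0 because ctrl_has_scc tests bit 8 of the ONCS field from Identify Controller, which advertises Simple Copy support; 0x15d has that bit (mask 0x100) set. The check in isolation, as a sketch using the array this script built:

    # Sketch: Simple Copy support is ONCS bit 8 (mask 0x100).
    oncs=${nvme0[oncs]}                  # 0x15d in this run
    if (( oncs & 1 << 8 )); then         # 0x15d & 0x100 == 0x100 -> true
      echo "nvme0 supports SCC"
    fi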
00:14:48.282   06:00:09 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:48.540  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:48.799  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:14:49.368   06:00:10 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:14:49.368   06:00:10 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:49.368   06:00:10 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:49.368   06:00:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:14:49.368  ************************************
00:14:49.368  START TEST nvme_simple_copy
00:14:49.368  ************************************
00:14:49.368   06:00:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:14:49.627  Initializing NVMe Controllers
00:14:49.627  Attaching to 0000:00:10.0
00:14:49.627  Controller supports SCC. Attached to 0000:00:10.0
00:14:49.627    Namespace ID: 1 size: 5GB
00:14:49.627  Initialization complete.
00:14:49.627  
00:14:49.627  Controller QEMU NVMe Ctrl       (12340               )
00:14:49.627  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:14:49.627  Namespace Block Size:4096
00:14:49.627  Writing LBAs 0 to 63 with Random Data
00:14:49.627  Copied LBAs from 0 - 63 to the Destination LBA 256
00:14:49.627  LBAs matching Written Data: 64
00:14:49.627  
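For reference, the simple_copy app above writes random data to LBAs 0-63, issues one NVMe Copy command with destination LBA 256, then reads the destination back and counts matching LBAs (64 of 64 here). A rough shell analogue with nvme-cli is sketched below; the copy flag spellings vary between nvme-cli versions and should be treated as assumptions (check `nvme copy --help` first):

    # Hypothetical nvme-cli analogue of the simple_copy flow (assumptions:
    # nvme-cli with Copy support; --sdlba/--slbs/--blocks flag spellings).
    dev=/dev/nvme0n1; bs=4096
    head -c $((64 * bs)) /dev/urandom > rand.bin
    nvme write "$dev" --start-block=0   --block-count=63 --data-size=$((64 * bs)) --data=rand.bin
    nvme copy  "$dev" --sdlba=256 --slbs=0 --blocks=63   # one source range: LBAs 0..63
    nvme read  "$dev" --start-block=256 --block-count=63 --data-size=$((64 * bs)) --data=copy.bin
    cmp -s rand.bin copy.bin && echo "LBAs matching Written Data: 64"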
00:14:49.627  real	0m0.260s
00:14:49.627  user	0m0.105s
00:14:49.627  sys	0m0.055s
00:14:49.627   06:00:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:49.627   06:00:10 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:14:49.627  ************************************
00:14:49.627  END TEST nvme_simple_copy
00:14:49.627  ************************************
00:14:49.627  
00:14:49.627  real	0m2.151s
00:14:49.627  user	0m0.690s
00:14:49.627  sys	0m1.420s
00:14:49.627   06:00:10 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:49.627   06:00:10 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:14:49.627  ************************************
00:14:49.627  END TEST nvme_scc
00:14:49.627  ************************************
00:14:49.627   06:00:10  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:14:49.627   06:00:10  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:14:49.627   06:00:10  -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:14:49.627   06:00:10  -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]]
00:14:49.627   06:00:10  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:14:49.627   06:00:10  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:14:49.627   06:00:10  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:49.627   06:00:10  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:49.627   06:00:10  -- common/autotest_common.sh@10 -- # set +x
00:14:49.627  ************************************
00:14:49.627  START TEST nvme_rpc
00:14:49.627  ************************************
00:14:49.627   06:00:10 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:14:49.627  * Looking for test storage...
00:14:49.627  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:14:49.627    06:00:10 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:49.627     06:00:10 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:14:49.627     06:00:10 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:49.627    06:00:10 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:49.627    06:00:10 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:49.887     06:00:10 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:49.887    06:00:10 nvme_rpc -- scripts/common.sh@368 -- # return 0
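The burst of scripts/common.sh trace above is `lt 1.15 2` deciding which lcov flags to pass: cmp_versions splits both version strings on '.', '-' or ':' into arrays and compares them field by field, numerically. A compact sketch of the same comparison; the function name is mine:

    # Sketch: component-wise numeric version compare ("is v1 < v2?").
    version_lt() {
      local -a v1 v2; local i n
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for ((i = 0; i < n; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1                           # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov 1.15 < 2: use the branch/function coverage opts"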
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:49.887  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.887  		--rc genhtml_branch_coverage=1
00:14:49.887  		--rc genhtml_function_coverage=1
00:14:49.887  		--rc genhtml_legend=1
00:14:49.887  		--rc geninfo_all_blocks=1
00:14:49.887  		--rc geninfo_unexecuted_blocks=1
00:14:49.887  		
00:14:49.887  		'
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:49.887  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.887  		--rc genhtml_branch_coverage=1
00:14:49.887  		--rc genhtml_function_coverage=1
00:14:49.887  		--rc genhtml_legend=1
00:14:49.887  		--rc geninfo_all_blocks=1
00:14:49.887  		--rc geninfo_unexecuted_blocks=1
00:14:49.887  		
00:14:49.887  		'
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:49.887  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.887  		--rc genhtml_branch_coverage=1
00:14:49.887  		--rc genhtml_function_coverage=1
00:14:49.887  		--rc genhtml_legend=1
00:14:49.887  		--rc geninfo_all_blocks=1
00:14:49.887  		--rc geninfo_unexecuted_blocks=1
00:14:49.887  		
00:14:49.887  		'
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:49.887  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.887  		--rc genhtml_branch_coverage=1
00:14:49.887  		--rc genhtml_function_coverage=1
00:14:49.887  		--rc genhtml_legend=1
00:14:49.887  		--rc geninfo_all_blocks=1
00:14:49.887  		--rc geninfo_unexecuted_blocks=1
00:14:49.887  		
00:14:49.887  		'
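[editor's note] The version gate traced above (lt 1.15 2) decides whether the installed lcov is old enough to need the --rc branch/function coverage flags. A condensed sketch of that comparison, assuming numeric version fields (the real helper additionally sanitizes each field through its decimal() wrapper, visible in the trace):

lt() {
    local -a ver1 ver2
    # Split both versions on '.', '-' or ':' exactly as the traced IFS=.-: does
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1   # equal versions are not "less than"
}

# lcov 1.15 < 2, so the coverage flags get enabled on this runner:
lt "$(lcov --version | awk '{print $NF}')" 2 \
    && lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'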
00:14:49.887   06:00:10 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:49.887    06:00:10 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:14:49.887     06:00:10 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:14:49.887     06:00:10 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:14:49.887     06:00:10 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:14:49.887     06:00:10 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:49.887      06:00:10 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:49.887      06:00:10 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:14:49.887     06:00:10 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:14:49.887     06:00:10 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:14:49.887    06:00:10 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:14:49.887   06:00:10 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
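[editor's note] The lookup above resolves the first NVMe BDF by asking gen_nvme.sh for a generated controller config and extracting each traddr with jq. Reconstructed from the traced commands, with $rootdir assumed to point at the SPDK checkout:

rootdir=/home/vagrant/spdk_repo/spdk   # assumed, per the paths in the trace

get_nvme_bdfs() {
    local -a bdfs
    # gen_nvme.sh emits one bdev_nvme_attach_controller entry per controller
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && return 1
    printf '%s\n' "${bdfs[@]}"
}

get_first_nvme_bdf() {
    local -a bdfs
    bdfs=($(get_nvme_bdfs))
    echo "${bdfs[0]}"
}

bdf=$(get_first_nvme_bdf)   # 0000:00:10.0 on this runner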
00:14:49.887   06:00:10 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=89451
00:14:49.887   06:00:10 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:14:49.887   06:00:10 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:14:49.887   06:00:10 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 89451
00:14:49.887   06:00:10 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 89451 ']'
00:14:49.887   06:00:10 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:49.887   06:00:10 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:49.887  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:49.887   06:00:10 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:49.887   06:00:10 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:49.887   06:00:10 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:49.887  [2024-11-18 06:00:10.739626] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:49.887  [2024-11-18 06:00:10.739820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89451 ]
00:14:50.147  [2024-11-18 06:00:10.894689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:50.147  [2024-11-18 06:00:10.921664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:50.147  [2024-11-18 06:00:10.921716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:50.147   06:00:11 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:50.147   06:00:11 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:14:50.147   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:14:50.713  Nvme0n1
00:14:50.713   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:14:50.713   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:14:50.713  request:
00:14:50.713  {
00:14:50.713    "bdev_name": "Nvme0n1",
00:14:50.713    "filename": "non_existing_file",
00:14:50.713    "method": "bdev_nvme_apply_firmware",
00:14:50.713    "req_id": 1
00:14:50.713  }
00:14:50.713  Got JSON-RPC error response
00:14:50.713  response:
00:14:50.713  {
00:14:50.713    "code": -32603,
00:14:50.713    "message": "open file failed."
00:14:50.713  }
00:14:50.713   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:14:50.713   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:14:50.713   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
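[editor's note] Stripped of xtrace noise, the nvme_rpc flow above is three RPCs against the freshly started spdk_tgt: attach the controller, drive the expected failure path of bdev_nvme_apply_firmware with a file that does not exist, then detach. In condensed shell form:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc_py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0   # prints Nvme0n1

# Negative test: the firmware image is missing, so the RPC must fail
# (-32603 "open file failed." in the JSON-RPC response above).
if $rpc_py bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo "expected bdev_nvme_apply_firmware to fail" >&2
    exit 1
fi

$rpc_py bdev_nvme_detach_controller Nvme0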
00:14:50.971   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:14:50.971   06:00:11 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 89451
00:14:50.971   06:00:11 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 89451 ']'
00:14:50.971   06:00:11 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 89451
00:14:50.971    06:00:11 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:14:50.971   06:00:11 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:50.971    06:00:11 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89451
00:14:51.230  killing process with pid 89451
00:14:51.230   06:00:11 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:51.230   06:00:11 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:51.230   06:00:11 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89451'
00:14:51.230   06:00:11 nvme_rpc -- common/autotest_common.sh@973 -- # kill 89451
00:14:51.230   06:00:11 nvme_rpc -- common/autotest_common.sh@978 -- # wait 89451
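[editor's note] Teardown runs the harness's killprocess helper. A simplified sketch of what the traced checks amount to; the real helper also escalates through sudo when the process owner requires it (that is what the "reactor_0 = sudo" test above is for):

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" 2>/dev/null || return 0            # nothing to do if already gone
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"   # reap the child and surface its exit code (works: spdk_tgt is our child)
}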
00:14:51.488  ************************************
00:14:51.488  END TEST nvme_rpc
00:14:51.488  ************************************
00:14:51.488  
00:14:51.488  real	0m1.797s
00:14:51.488  user	0m3.523s
00:14:51.488  sys	0m0.506s
00:14:51.488   06:00:12 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:51.488   06:00:12 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:14:51.488   06:00:12  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:14:51.488   06:00:12  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:51.488   06:00:12  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:51.488   06:00:12  -- common/autotest_common.sh@10 -- # set +x
00:14:51.488  ************************************
00:14:51.488  START TEST nvme_rpc_timeouts
00:14:51.488  ************************************
00:14:51.488   06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:14:51.488  * Looking for test storage...
00:14:51.488  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:14:51.488    06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:51.488     06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version
00:14:51.488     06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:51.488    06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:14:51.488    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:51.488     06:00:12 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:14:51.747    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:14:51.747    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:51.747    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:51.747    06:00:12 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:14:51.747    06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:51.747    06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:51.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:51.747  		--rc genhtml_branch_coverage=1
00:14:51.747  		--rc genhtml_function_coverage=1
00:14:51.747  		--rc genhtml_legend=1
00:14:51.747  		--rc geninfo_all_blocks=1
00:14:51.747  		--rc geninfo_unexecuted_blocks=1
00:14:51.747  		
00:14:51.747  		'
00:14:51.747    06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:51.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:51.747  		--rc genhtml_branch_coverage=1
00:14:51.747  		--rc genhtml_function_coverage=1
00:14:51.747  		--rc genhtml_legend=1
00:14:51.747  		--rc geninfo_all_blocks=1
00:14:51.747  		--rc geninfo_unexecuted_blocks=1
00:14:51.747  		
00:14:51.747  		'
00:14:51.747    06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:51.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:51.747  		--rc genhtml_branch_coverage=1
00:14:51.747  		--rc genhtml_function_coverage=1
00:14:51.747  		--rc genhtml_legend=1
00:14:51.747  		--rc geninfo_all_blocks=1
00:14:51.747  		--rc geninfo_unexecuted_blocks=1
00:14:51.747  		
00:14:51.747  		'
00:14:51.747    06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:51.747  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:51.747  		--rc genhtml_branch_coverage=1
00:14:51.747  		--rc genhtml_function_coverage=1
00:14:51.747  		--rc genhtml_legend=1
00:14:51.747  		--rc geninfo_all_blocks=1
00:14:51.747  		--rc geninfo_unexecuted_blocks=1
00:14:51.747  		
00:14:51.747  		'
00:14:51.747   06:00:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:14:51.747   06:00:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_89498
00:14:51.747   06:00:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_89498
00:14:51.747   06:00:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=89530
00:14:51.747   06:00:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:14:51.747   06:00:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:14:51.747   06:00:12 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 89530
00:14:51.747   06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 89530 ']'
00:14:51.747   06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:51.747   06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:51.747  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:51.747   06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:51.747   06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:51.747   06:00:12 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:14:51.747  [2024-11-18 06:00:12.532308] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:14:51.747  [2024-11-18 06:00:12.532501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89530 ]
00:14:51.747  [2024-11-18 06:00:12.678005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:51.747  [2024-11-18 06:00:12.699903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:51.747  [2024-11-18 06:00:12.699980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:52.683   06:00:13 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:52.683   06:00:13 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:14:52.683   06:00:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:14:52.683  Checking default timeout settings:
00:14:52.683   06:00:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:14:52.942  Making settings changes with rpc:
00:14:52.942   06:00:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:14:52.943   06:00:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:14:53.202  Check default vs. modified settings:
00:14:53.202   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:14:53.202   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_89498
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_89498
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:14:53.771  Setting action_on_timeout is changed as expected.
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_89498
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_89498
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:14:53.771  Setting timeout_us is changed as expected.
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_89498
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_89498
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:14:53.771    06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:14:53.771  Setting timeout_admin_us is changed as expected.
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
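[editor's note] The timeouts test saves the target config twice (save_config before and after bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort) and asserts each field actually changed. The compare loop traced above, condensed (file names are the PID-stamped tmpfiles from this run; the failure message is paraphrased):

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
default=/tmp/settings_default_89498    # written by "$rpc_py save_config" before set_options
modified=/tmp/settings_modified_89498  # written after set_options

for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" "$default" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" "$modified" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" == "$after" ]; then
        echo "Setting $setting was not changed" >&2
        exit 1
    fi
    echo "Setting $setting is changed as expected."
done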
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_89498 /tmp/settings_modified_89498
00:14:53.771   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 89530
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 89530 ']'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 89530
00:14:53.771    06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:53.771    06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89530
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:53.771  killing process with pid 89530
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89530'
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 89530
00:14:53.771   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 89530
00:14:54.030  RPC TIMEOUT SETTING TEST PASSED.
00:14:54.030   06:00:14 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:14:54.030  
00:14:54.030  real	0m2.558s
00:14:54.030  user	0m5.334s
00:14:54.030  sys	0m0.547s
00:14:54.030   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:54.030  ************************************
00:14:54.030  END TEST nvme_rpc_timeouts
00:14:54.030  ************************************
00:14:54.030   06:00:14 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:14:54.030    06:00:14  -- spdk/autotest.sh@239 -- # uname -s
00:14:54.030   06:00:14  -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']'
00:14:54.030   06:00:14  -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:14:54.030   06:00:14  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:54.030   06:00:14  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:54.030   06:00:14  -- common/autotest_common.sh@10 -- # set +x
00:14:54.030  ************************************
00:14:54.030  START TEST sw_hotplug
00:14:54.030  ************************************
00:14:54.030   06:00:14 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:14:54.030  * Looking for test storage...
00:14:54.030  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:14:54.030    06:00:14 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:54.030     06:00:14 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version
00:14:54.030     06:00:14 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:54.289    06:00:15 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-:
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-:
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<'
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@345 -- # : 1
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@365 -- # decimal 1
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@353 -- # local d=1
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@355 -- # echo 1
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@366 -- # decimal 2
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@353 -- # local d=2
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:54.289     06:00:15 sw_hotplug -- scripts/common.sh@355 -- # echo 2
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:54.289    06:00:15 sw_hotplug -- scripts/common.sh@368 -- # return 0
00:14:54.289    06:00:15 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:54.289    06:00:15 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:54.289  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.289  		--rc genhtml_branch_coverage=1
00:14:54.289  		--rc genhtml_function_coverage=1
00:14:54.289  		--rc genhtml_legend=1
00:14:54.289  		--rc geninfo_all_blocks=1
00:14:54.289  		--rc geninfo_unexecuted_blocks=1
00:14:54.289  		
00:14:54.289  		'
00:14:54.289    06:00:15 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:54.289  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.289  		--rc genhtml_branch_coverage=1
00:14:54.289  		--rc genhtml_function_coverage=1
00:14:54.289  		--rc genhtml_legend=1
00:14:54.289  		--rc geninfo_all_blocks=1
00:14:54.289  		--rc geninfo_unexecuted_blocks=1
00:14:54.289  		
00:14:54.289  		'
00:14:54.289    06:00:15 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:54.289  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.289  		--rc genhtml_branch_coverage=1
00:14:54.289  		--rc genhtml_function_coverage=1
00:14:54.289  		--rc genhtml_legend=1
00:14:54.289  		--rc geninfo_all_blocks=1
00:14:54.289  		--rc geninfo_unexecuted_blocks=1
00:14:54.289  		
00:14:54.289  		'
00:14:54.289    06:00:15 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:54.289  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:54.289  		--rc genhtml_branch_coverage=1
00:14:54.289  		--rc genhtml_function_coverage=1
00:14:54.289  		--rc genhtml_legend=1
00:14:54.289  		--rc geninfo_all_blocks=1
00:14:54.289  		--rc geninfo_unexecuted_blocks=1
00:14:54.289  		
00:14:54.289  		'
00:14:54.289   06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:54.549  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:54.549  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:14:55.117   06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6
00:14:55.117   06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3
00:14:55.117   06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace))
00:14:55.117    06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@233 -- # local class
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:14:55.117       06:00:15 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:14:55.117       06:00:15 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:14:55.117       06:00:15 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:14:55.117      06:00:15 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@18 -- # local i
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:14:55.117     06:00:15 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@328 -- # (( 1 ))
00:14:55.117    06:00:15 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0
00:14:55.117   06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1
00:14:55.117   06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
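[editor's note] nvme_in_userspace above walks PCI class 01 / subclass 08 / prog-if 02 (NVMe) and keeps devices that pass pci_can_use and are not bound to the kernel nvme driver. The scan pipeline as traced; the literal quotes inside cc are intentional, because lspci -mm quotes the class field:

# Emit the BDF of every NVMe-class PCI function (class 0108, prog-if 02):
lspci -mm -n -D \
    | grep -i -- -p02 \
    | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
    | tr -d '"'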
00:14:55.117   06:00:15 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:55.415  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:55.415  Waiting for block devices as requested
00:14:55.415  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:14:55.694   06:00:16 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0
00:14:55.694   06:00:16 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:55.953  0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:14:55.953  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:14:55.953  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:14:56.521   06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:14:56.521   06:00:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:14:56.779   06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:14:56.779   06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:14:56.779   06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=90037
00:14:56.779   06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:14:56.779   06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:14:56.779   06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning
00:14:56.779    06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:14:56.779    06:00:17 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:14:56.779    06:00:17 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:14:56.779    06:00:17 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:14:56.779    06:00:17 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:14:56.780     06:00:17 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:14:56.780     06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:14:56.780     06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:14:56.780     06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:14:56.780     06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:14:56.780     06:00:17 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:14:56.780  Initializing NVMe Controllers
00:14:56.780  Attaching to 0000:00:10.0
00:14:56.780  Attached to 0000:00:10.0
00:14:57.038  Initialization complete. Starting I/O...
00:14:57.038  QEMU NVMe Ctrl       (12340               ):          0 I/Os completed (+0)
00:14:57.038  
00:14:57.975  QEMU NVMe Ctrl       (12340               ):       2040 I/Os completed (+2040)
00:14:57.975  
00:14:58.911  QEMU NVMe Ctrl       (12340               ):       4728 I/Os completed (+2688)
00:14:58.911  
00:14:59.850  QEMU NVMe Ctrl       (12340               ):       7764 I/Os completed (+3036)
00:14:59.850  
00:15:00.787  QEMU NVMe Ctrl       (12340               ):      10836 I/Os completed (+3072)
00:15:00.787  
00:15:02.164  QEMU NVMe Ctrl       (12340               ):      13936 I/Os completed (+3100)
00:15:02.164  
00:15:02.732     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:02.732     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:02.732     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:02.732  [2024-11-18 06:00:23.579110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:02.732  Controller removed: QEMU NVMe Ctrl       (12340               )
00:15:02.732  [2024-11-18 06:00:23.580525] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  [2024-11-18 06:00:23.580607] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  [2024-11-18 06:00:23.580631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  [2024-11-18 06:00:23.580651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:15:02.732  [2024-11-18 06:00:23.582556] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  [2024-11-18 06:00:23.582615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  [2024-11-18 06:00:23.582636] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  [2024-11-18 06:00:23.582655] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:02.732  EAL: eal_parse_sysfs_value(): cannot read sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device
00:15:02.732     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:15:02.733     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:02.733  EAL: Scan for (pci) bus failed.
00:15:02.733     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:02.733     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:02.733     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:02.991     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:02.991  
00:15:02.991     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:02.991     06:00:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:02.991  Attaching to 0000:00:10.0
00:15:02.991  Attached to 0000:00:10.0
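[editor's note] xtrace does not print redirections, so the bare "echo 1" / "echo uio_pci_generic" lines above hide their sysfs targets. A plausible reconstruction of the remove/rescan/rebind dance; the exact target files are an assumption (remove, rescan, driver_override and drivers_probe are the standard Linux PCI interfaces for this):

bdf=0000:00:10.0

echo 1 > "/sys/bus/pci/devices/$bdf/remove"   # @40: surprise-remove while I/O is running
sleep 6                                        # let the hotplug app observe the removal

echo 1 > /sys/bus/pci/rescan                                          # @56: rediscover
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"    # @59
echo "$bdf" > /sys/bus/pci/drivers_probe       # @60/@61: bind to userspace again (assumed)
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                 # @62: clear override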
00:15:03.929  QEMU NVMe Ctrl       (12340               ):       2984 I/Os completed (+2984)
00:15:03.929  
00:15:04.866  QEMU NVMe Ctrl       (12340               ):       6056 I/Os completed (+3072)
00:15:04.866  
00:15:05.803  QEMU NVMe Ctrl       (12340               ):       9128 I/Os completed (+3072)
00:15:05.803  
00:15:07.181  QEMU NVMe Ctrl       (12340               ):      12038 I/Os completed (+2910)
00:15:07.181  
00:15:08.118  QEMU NVMe Ctrl       (12340               ):      15010 I/Os completed (+2972)
00:15:08.118  
00:15:09.067  QEMU NVMe Ctrl       (12340               ):      17862 I/Os completed (+2852)
00:15:09.067  
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:09.067  [2024-11-18 06:00:29.790012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:09.067  Controller removed: QEMU NVMe Ctrl       (12340               )
00:15:09.067  [2024-11-18 06:00:29.791335] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067  [2024-11-18 06:00:29.791404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067  [2024-11-18 06:00:29.791436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067  [2024-11-18 06:00:29.791457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:15:09.067  [2024-11-18 06:00:29.793272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067  [2024-11-18 06:00:29.793314] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067  [2024-11-18 06:00:29.793340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067  [2024-11-18 06:00:29.793359] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:09.067     06:00:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:09.067  Attaching to 0000:00:10.0
00:15:09.067  Attached to 0000:00:10.0
00:15:10.090  QEMU NVMe Ctrl       (12340               ):       2063 I/Os completed (+2063)
00:15:10.090  
00:15:11.027  QEMU NVMe Ctrl       (12340               ):       4831 I/Os completed (+2768)
00:15:11.027  
00:15:11.966  QEMU NVMe Ctrl       (12340               ):       8015 I/Os completed (+3184)
00:15:11.966  
00:15:12.904  QEMU NVMe Ctrl       (12340               ):      11032 I/Os completed (+3017)
00:15:12.904  
00:15:13.839  QEMU NVMe Ctrl       (12340               ):      13988 I/Os completed (+2956)
00:15:13.839  
00:15:15.215  QEMU NVMe Ctrl       (12340               ):      17020 I/Os completed (+3032)
00:15:15.215  
00:15:15.215     06:00:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:15:15.215     06:00:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:15.215     06:00:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:15.215     06:00:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:15.215  [2024-11-18 06:00:35.997712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:15.215  Controller removed: QEMU NVMe Ctrl       (12340               )
00:15:15.215  [2024-11-18 06:00:35.999164] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215  [2024-11-18 06:00:35.999266] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215  [2024-11-18 06:00:35.999290] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215  [2024-11-18 06:00:35.999316] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:15:15.215  [2024-11-18 06:00:36.001319] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215  [2024-11-18 06:00:36.001364] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215  [2024-11-18 06:00:36.001390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215  [2024-11-18 06:00:36.001412] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:15.215     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:15:15.215     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:15.215     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:15.215     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:15.215     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:15.215     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:15.474     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:15.474     06:00:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:15.474  Attaching to 0000:00:10.0
00:15:15.474  Attached to 0000:00:10.0
00:15:15.474  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:15:15.474  [2024-11-18 06:00:36.212053] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:15:22.036     06:00:42 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:15:22.036     06:00:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:22.036    06:00:42 sw_hotplug -- common/autotest_common.sh@719 -- # time=24.63
00:15:22.036    06:00:42 sw_hotplug -- common/autotest_common.sh@720 -- # echo 24.63
00:15:22.036    06:00:42 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:15:22.036   06:00:42 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.63
00:15:22.036   06:00:42 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.63 1
remove_attach_helper took 24.63s to complete (handling 1 nvme drive(s))
 06:00:42 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
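[editor's note] timing_cmd wraps the helper in bash's time with TIMEFORMAT=%2R, so only the real time, to two decimals, comes out; that is the 24.63 being formatted above. A self-contained equivalent (the harness's exec-based plumbing in autotest_common.sh is more involved):

TIMEFORMAT=%2R
helper_time=$({ time remove_attach_helper 3 6 false >/dev/null 2>&1; } 2>&1)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
    "$helper_time" 1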
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 90037
00:15:27.415  /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (90037) - No such process
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 90037
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=90390
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 90390
00:15:27.415   06:00:48 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:15:27.415   06:00:48 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 90390 ']'
00:15:27.415   06:00:48 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:27.415   06:00:48 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:27.415  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:27.415   06:00:48 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:27.415   06:00:48 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:27.415   06:00:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:27.415  [2024-11-18 06:00:48.289187] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 22.11.4 initialization...
00:15:27.415  [2024-11-18 06:00:48.289400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90390 ]
00:15:27.674  [2024-11-18 06:00:48.449911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:27.674  [2024-11-18 06:00:48.476200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:28.243   06:00:49 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:28.243   06:00:49 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:15:28.243   06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:15:28.243   06:00:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:28.243   06:00:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:28.243   06:00:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
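[editor's note] Here the test switches to target-driven hotplug: bdev_nvme_set_hotplug -e makes the target itself poll for PCIe NVMe insertions and removals, so the checks that follow go through bdev_get_bdevs instead of the standalone hotplug app. Via rpc.py that is simply:

# Enable the target's built-in NVMe hotplug monitor
# (rpc_cmd in the trace is a thin wrapper around this script):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e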
00:15:28.243   06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:15:28.243   06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:15:28.243    06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:15:28.243    06:00:49 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:15:28.243    06:00:49 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:15:28.243    06:00:49 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:15:28.243    06:00:49 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:15:28.243     06:00:49 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:15:28.243     06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:15:28.243     06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:15:28.243     06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:15:28.243     06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:15:28.243     06:00:49 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:15:34.807      06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:15:34.807      06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:34.807      06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:34.807       06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:34.807       06:00:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.807       06:00:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:34.807       06:00:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:15:34.807  [2024-11-18 06:00:55.273093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:34.807  [2024-11-18 06:00:55.275032] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:34.807  [2024-11-18 06:00:55.275090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:34.807  [2024-11-18 06:00:55.275116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:34.807  [2024-11-18 06:00:55.275138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:34.807  [2024-11-18 06:00:55.275156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:34.807  [2024-11-18 06:00:55.275173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:34.807  [2024-11-18 06:00:55.275193] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:34.807  [2024-11-18 06:00:55.275221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:34.807  [2024-11-18 06:00:55.275236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:34.807  [2024-11-18 06:00:55.275249] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:34.807  [2024-11-18 06:00:55.275264] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:34.807  [2024-11-18 06:00:55.275277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:15:34.807     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:15:34.807      06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:15:34.807      06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:34.807      06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:34.807       06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:34.807       06:00:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:34.808       06:00:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:34.808       06:00:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:34.808     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
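[editor's note] With use_bdev=true the helper no longer trusts sysfs; it polls the target until the removed BDF stops showing up among the NVMe bdevs. The bdev_bdfs helper and the poll loop traced above, condensed (rpc_cmd assumed to invoke rpc.py against the default socket):

bdev_bdfs() {
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}

bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done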
00:15:34.808     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:35.067     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:35.067     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:35.067     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:35.067     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:35.067     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:35.067     06:00:55 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:15:41.641      06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:15:41.641      06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:41.641      06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:41.641       06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:41.641       06:01:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.641       06:01:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:41.641       06:01:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:41.641  [2024-11-18 06:01:01.973194] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:41.641  [2024-11-18 06:01:01.975459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:41.641  [2024-11-18 06:01:01.975535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:41.641  [2024-11-18 06:01:01.975559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:41.641  [2024-11-18 06:01:01.975692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:41.641  [2024-11-18 06:01:01.975712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:41.641  [2024-11-18 06:01:01.975730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:41.641  [2024-11-18 06:01:01.975745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:41.641  [2024-11-18 06:01:01.975776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:41.641  [2024-11-18 06:01:01.975794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:41.641  [2024-11-18 06:01:01.975815] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:41.641  [2024-11-18 06:01:01.975829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:41.641  [2024-11-18 06:01:01.975846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
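Note: the ERROR/NOTICE burst above (repeated at each removal below) is the expected teardown noise of a surprise hot-remove: the four outstanding ASYNC EVENT REQUEST admin commands (cid 187-190) are aborted once the controller at 0000:00:10.0 enters failed state. The @591 [[ 0 == 0 ]] checks that follow show the test itself keeps passing.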
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:15:41.641     06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:15:41.641      06:01:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:15:41.641      06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:41.641      06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:41.641       06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:41.641       06:01:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:41.641       06:01:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:41.641       06:01:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:41.641     06:01:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:15:48.210      06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:15:48.210      06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:48.210       06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:48.210      06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:48.210       06:01:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.210       06:01:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:48.210       06:01:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:15:48.210      06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:15:48.210      06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:48.210      06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:48.210       06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:48.210       06:01:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:48.210       06:01:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:48.210  [2024-11-18 06:01:08.273334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:48.210  [2024-11-18 06:01:08.275408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:48.210  [2024-11-18 06:01:08.275494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:48.210  [2024-11-18 06:01:08.275522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:48.210  [2024-11-18 06:01:08.275544] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:48.210  [2024-11-18 06:01:08.275563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:48.210  [2024-11-18 06:01:08.275578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:48.210  [2024-11-18 06:01:08.275595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:48.210  [2024-11-18 06:01:08.275613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:48.210  [2024-11-18 06:01:08.275630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:48.210  [2024-11-18 06:01:08.275645] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:48.210  [2024-11-18 06:01:08.275679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:15:48.210  [2024-11-18 06:01:08.275694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:48.210       06:01:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:48.210     06:01:08 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:53.480     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:15:53.480     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:15:53.480      06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:15:53.480      06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:15:53.480      06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:15:53.480       06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:15:53.480       06:01:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.480       06:01:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:53.740       06:01:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.740     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:15:53.740     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:53.740    06:01:14 sw_hotplug -- common/autotest_common.sh@719 -- # time=25.30
00:15:53.740    06:01:14 sw_hotplug -- common/autotest_common.sh@720 -- # echo 25.30
00:15:53.740    06:01:14 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:15:53.740   06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.30
00:15:53.740   06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.30 1
00:15:53.740  remove_attach_helper took 25.30s to complete (handling 1 nvme drive(s))
00:15:53.740   06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:15:53.740   06:01:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.740   06:01:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:53.740   06:01:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.740   06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:15:53.740   06:01:14 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:53.740   06:01:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:53.740   06:01:14 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:53.740   06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:15:53.740   06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:15:53.740    06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:15:53.740    06:01:14 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:15:53.740    06:01:14 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:15:53.740    06:01:14 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:15:53.740    06:01:14 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:15:53.740     06:01:14 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:15:53.740     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:15:53.740     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:15:53.740     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:15:53.740     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:15:53.740     06:01:14 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
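Note: debug_remove_attach_helper wraps each run in the timing_cmd helper so the per-pass duration can be reported. Pieced together from the autotest_common.sh@709-@722 trace, the wrapper behaves roughly like the sketch below; only the variable names and TIMEFORMAT=%2R are visible in the log, the fd plumbing is an assumption:

  # rough shape of timing_cmd: run the command, capture only the %2R report
  # (real seconds, two decimals) that the `time` keyword writes to stderr,
  # and preserve the command's exit status
  timing_cmd() (
    local cmd_es=0 time=0 TIMEFORMAT=%2R                   # @709, @713 (the @711 tty check/exec is elided)
    exec 3>&1 4>&2                                         # keep the command's own output visible (assumed)
    time=$( { time "$@" 1>&3 2>&4; } 2>&1 ) || cmd_es=$?   # @719
    echo "$time"                                           # @720: the caller stores this as helper_time
    return "$cmd_es"                                       # @722
  )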
00:16:00.376     06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:00.376     06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:00.376     06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:00.376     06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:00.376     06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:00.376      06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:00.376      06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:00.376      06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:00.377       06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:00.377       06:01:20 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.377       06:01:20 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:00.377       06:01:20 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.377     06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:16:00.377     06:01:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:16:00.377  [2024-11-18 06:01:20.600303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:00.377  [2024-11-18 06:01:20.602648] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:00.377  [2024-11-18 06:01:20.602724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:00.377  [2024-11-18 06:01:20.602747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.377  [2024-11-18 06:01:20.602788] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:00.377  [2024-11-18 06:01:20.602806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:00.377  [2024-11-18 06:01:20.602831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.377  [2024-11-18 06:01:20.602846] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:00.377  [2024-11-18 06:01:20.602864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:00.377  [2024-11-18 06:01:20.602878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.377  [2024-11-18 06:01:20.602895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:00.377  [2024-11-18 06:01:20.602909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:00.377  [2024-11-18 06:01:20.602925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:00.377      06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:00.377      06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:00.377      06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:00.377       06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:00.377       06:01:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:00.377       06:01:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:00.377       06:01:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:00.377     06:01:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
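Note: with use_bdev=true the removal is polled rather than slept through: sw_hotplug.sh@50-@51 re-runs bdev_bdfs every half second and names any BDF still present (the single "Still waiting for 0000:00:10.0 to be gone" above). Reconstructed from the trace:

  # polling loop inferred from the @50/@51 lines: spin until no NVMe bdev
  # reports a PCI address any more, i.e. the removed controller is really gone
  while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
    sleep 0.5
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
  done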
00:16:06.942     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:16:06.942     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:16:06.942      06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:16:06.942      06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:06.942      06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:06.942       06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:06.942       06:01:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.942       06:01:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:06.942       06:01:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.942     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:16:06.942     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:06.942     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:06.942     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:06.942  [2024-11-18 06:01:27.300372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:06.942  [2024-11-18 06:01:27.302456] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:06.942  [2024-11-18 06:01:27.302516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:06.943  [2024-11-18 06:01:27.302542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.943  [2024-11-18 06:01:27.302563] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:06.943  [2024-11-18 06:01:27.302579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:06.943  [2024-11-18 06:01:27.302593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.943  [2024-11-18 06:01:27.302609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:06.943  [2024-11-18 06:01:27.302623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:06.943  [2024-11-18 06:01:27.302638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.943  [2024-11-18 06:01:27.302651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:06.943  [2024-11-18 06:01:27.302666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:06.943  [2024-11-18 06:01:27.302679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:06.943      06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:06.943      06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:06.943       06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:06.943      06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:06.943       06:01:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:06.943       06:01:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:06.943       06:01:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:06.943     06:01:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:16:13.508      06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:16:13.508      06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:13.508      06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:13.508       06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:13.508       06:01:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.508       06:01:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:13.508       06:01:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:13.508     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:13.508      06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:13.508      06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:13.508      06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:13.508       06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:13.508       06:01:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.508       06:01:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:13.509       06:01:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.509     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:16:13.509     06:01:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:16:13.509  [2024-11-18 06:01:33.600492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:13.509  [2024-11-18 06:01:33.602562] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:13.509  [2024-11-18 06:01:33.602629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:13.509  [2024-11-18 06:01:33.602651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:13.509  [2024-11-18 06:01:33.602673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:13.509  [2024-11-18 06:01:33.602687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:13.509  [2024-11-18 06:01:33.602706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:13.509  [2024-11-18 06:01:33.602720] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:13.509  [2024-11-18 06:01:33.602736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:13.509  [2024-11-18 06:01:33.602749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:13.509  [2024-11-18 06:01:33.602764] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:13.509  [2024-11-18 06:01:33.602824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:13.509  [2024-11-18 06:01:33.602859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:13.509      06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:13.509      06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:13.509       06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:13.509      06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:13.509       06:01:34 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.509       06:01:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:13.509       06:01:34 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:13.509     06:01:34 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:16:20.074     06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:16:20.074     06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:16:20.074      06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:16:20.074      06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:20.074      06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:20.074       06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:20.074       06:01:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:20.074       06:01:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:20.074       06:01:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:20.074     06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:16:20.074     06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:20.074    06:01:40 sw_hotplug -- common/autotest_common.sh@719 -- # time=25.81
00:16:20.074    06:01:40 sw_hotplug -- common/autotest_common.sh@720 -- # echo 25.81
00:16:20.074    06:01:40 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:16:20.074   06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.81
00:16:20.074   06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.81 1
00:16:20.074  remove_attach_helper took 25.81s to complete (handling 1 nvme drive(s))
00:16:20.074   06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:16:20.074   06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 90390
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 90390 ']'
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 90390
00:16:20.074    06:01:40 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:16:20.074    06:01:40 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90390
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:16:20.074  killing process with pid 90390
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90390'
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@973 -- # kill 90390
00:16:20.074   06:01:40 sw_hotplug -- common/autotest_common.sh@978 -- # wait 90390
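Note: teardown goes through the common killprocess helper, whose control flow is fully traced above (autotest_common.sh@954-@978). Reconstructed, with the branches this run did not take reduced to comments:

  killprocess() {
    local pid=$1 process_name=""
    [ -z "$pid" ] && return 1                           # @954: require a pid
    kill -0 "$pid" || return 1                          # @958: pid must still be alive
    if [ "$(uname)" = Linux ]; then                     # @959
      process_name=$(ps --no-headers -o comm= "$pid")   # @960: reactor_0 here
    fi
    # @964: a process named "sudo" would get extra handling (branch not taken)
    echo "killing process with pid $pid"                # @972
    kill "$pid"                                         # @973
    wait "$pid"                                         # @978: reap it
  }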
00:16:20.074   06:01:40 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:16:20.074  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:16:20.074  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:16:20.642  
00:16:20.642  real	1m26.653s
00:16:20.642  user	1m1.796s
00:16:20.642  sys	0m14.940s
00:16:20.642   06:01:41 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:20.642   06:01:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:20.642  ************************************
00:16:20.642  END TEST sw_hotplug
00:16:20.642  ************************************
00:16:20.642   06:01:41  -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]]
00:16:20.642   06:01:41  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:16:20.642   06:01:41  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:16:20.642   06:01:41  -- spdk/autotest.sh@260 -- # timing_exit lib
00:16:20.642   06:01:41  -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:20.642   06:01:41  -- common/autotest_common.sh@10 -- # set +x
00:16:20.900   06:01:41  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:16:20.900   06:01:41  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:16:20.900   06:01:41  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:16:20.900   06:01:41  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:16:20.900   06:01:41  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:16:20.900   06:01:41  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:16:20.900   06:01:41  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:16:20.900   06:01:41  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:16:20.901   06:01:41  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:16:20.901   06:01:41  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:16:20.901   06:01:41  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:16:20.901   06:01:41  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:16:20.901   06:01:41  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:16:20.901   06:01:41  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:16:20.901   06:01:41  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:16:20.901   06:01:41  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:16:20.901   06:01:41  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:16:20.901   06:01:41  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:16:20.901   06:01:41  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:16:20.901   06:01:41  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:16:20.901   06:01:41  -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:20.901   06:01:41  -- common/autotest_common.sh@10 -- # set +x
00:16:20.901   06:01:41  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:16:20.901   06:01:41  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:16:20.901   06:01:41  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:16:20.901   06:01:41  -- common/autotest_common.sh@10 -- # set +x
00:16:22.805  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:16:22.805  Waiting for block devices as requested
00:16:22.805  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:16:23.373  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15,mount@vda:vda16, so not binding PCI dev
00:16:23.373  Cleaning
00:16:23.373  Removing:    /var/run/dpdk/spdk0/config
00:16:23.373  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:16:23.373  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:16:23.373  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:16:23.373  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:16:23.373  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:16:23.373  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:16:23.373  Removing:    /dev/shm/spdk_tgt_trace.pid77780
00:16:23.373  Removing:    /var/run/dpdk/spdk0
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid77622
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid77780
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid77982
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78064
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78085
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78191
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78209
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78352
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78595
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78764
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78840
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid78925
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79012
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79085
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79119
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79161
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79226
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79321
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79799
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79854
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79893
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79900
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79965
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid79979
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80037
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80051
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80093
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80104
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80150
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80156
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80289
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80324
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80356
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80435
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80588
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80631
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid80656
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid81812
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid81995
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82164
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82251
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82349
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82391
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82417
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82442
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82848
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82912
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid82997
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83031
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83159
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83189
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83230
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83267
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83398
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83528
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid83748
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid84004
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid84023
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid84055
00:16:23.373  Removing:    /var/run/dpdk/spdk_pid84068
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84083
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84102
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84116
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84130
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84149
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84168
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84177
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84196
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84215
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84224
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84243
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84262
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84271
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84290
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84308
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84318
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84354
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84368
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84396
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84463
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84490
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84501
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84530
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84541
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84545
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84591
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84598
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84631
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84634
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84648
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84651
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84660
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84668
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84676
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84685
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84707
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84740
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84745
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84779
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84785
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84794
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84835
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84848
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84874
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84884
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84887
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84901
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84904
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84912
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84921
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid84924
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85007
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85060
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85173
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85183
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85216
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85261
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85282
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85303
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85313
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85349
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85370
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85446
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85482
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85520
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85752
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85853
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85882
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85911
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85947
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid85980
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86016
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86043
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86142
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86193
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86223
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86443
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86519
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86606
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86643
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86663
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid86747
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid87124
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid87150
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid87431
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid87507
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid87595
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid87631
00:16:23.640  Removing:    /var/run/dpdk/spdk_pid87657
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid87681
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid88852
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid88967
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid88975
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid88993
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid89451
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid89530
00:16:23.920  Removing:    /var/run/dpdk/spdk_pid90390
00:16:23.920  Clean
00:16:23.920   06:01:44  -- common/autotest_common.sh@1453 -- # return 0
00:16:23.920   06:01:44  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:16:23.920   06:01:44  -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:23.920   06:01:44  -- common/autotest_common.sh@10 -- # set +x
00:16:23.920   06:01:44  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:16:23.920   06:01:44  -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:23.920   06:01:44  -- common/autotest_common.sh@10 -- # set +x
00:16:23.920   06:01:44  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:16:23.920   06:01:44  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:16:23.920   06:01:44  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:16:23.920   06:01:44  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:16:23.920    06:01:44  -- spdk/autotest.sh@398 -- # hostname
00:16:23.920   06:01:44  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2404-cloud-1720510786-2314 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:16:24.191  geninfo: WARNING: invalid characters removed from testname!
00:17:31.882   06:02:42  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:31.882   06:02:49  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:31.882   06:02:52  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:36.071   06:02:56  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:39.362   06:02:59  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:17:43.545   06:03:03  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
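Note: the six lcov invocations above form one post-processing chain: merge the pre-test baseline capture with the test capture, then strip DPDK sources, system headers, and a few SPDK example/app trees from the combined report. Condensed, with the long --rc flags and absolute path prefixes elided:

  # condensed view of autotest.sh@399-@407 (flags and path prefixes elided)
  lcov -q -a cov_base.info -a cov_test.info -o cov_total.info      # @399: merge base + test
  lcov -q -r cov_total.info '*/dpdk/*'           -o cov_total.info # @400
  lcov -q -r cov_total.info '/usr/*'             -o cov_total.info # @404 (+ --ignore-errors unused)
  lcov -q -r cov_total.info '*/examples/vmd/*'   -o cov_total.info # @405
  lcov -q -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info # @406
  lcov -q -r cov_total.info '*/app/spdk_top/*'   -o cov_total.info # @407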
00:17:46.831   06:03:07  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:17:46.831   06:03:07  -- spdk/autorun.sh@1 -- $ timing_finish
00:17:46.831   06:03:07  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:17:46.831   06:03:07  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:17:46.831   06:03:07  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:17:46.831   06:03:07  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:46.831  + [[ -n 2553 ]]
00:17:46.831  + sudo kill 2553
00:17:46.839  [Pipeline] }
00:17:46.854  [Pipeline] // timeout
00:17:46.858  [Pipeline] }
00:17:46.871  [Pipeline] // stage
00:17:46.876  [Pipeline] }
00:17:46.889  [Pipeline] // catchError
00:17:46.897  [Pipeline] stage
00:17:46.899  [Pipeline] { (Stop VM)
00:17:46.910  [Pipeline] sh
00:17:47.189  + vagrant halt
00:17:51.377  ==> default: Halting domain...
00:18:01.423  [Pipeline] sh
00:18:01.703  + vagrant destroy -f
00:18:04.988  ==> default: Removing domain...
00:18:05.258  [Pipeline] sh
00:18:05.537  + mv output /var/jenkins/workspace/ubuntu24-vg-autotest/output
00:18:05.545  [Pipeline] }
00:18:05.559  [Pipeline] // stage
00:18:05.563  [Pipeline] }
00:18:05.576  [Pipeline] // dir
00:18:05.581  [Pipeline] }
00:18:05.593  [Pipeline] // wrap
00:18:05.599  [Pipeline] }
00:18:05.610  [Pipeline] // catchError
00:18:05.618  [Pipeline] stage
00:18:05.620  [Pipeline] { (Epilogue)
00:18:05.631  [Pipeline] sh
00:18:05.911  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:27.852  [Pipeline] catchError
00:18:27.854  [Pipeline] {
00:18:27.869  [Pipeline] sh
00:18:28.153  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:28.411  Artifacts sizes are good
00:18:28.418  [Pipeline] }
00:18:28.428  [Pipeline] // catchError
00:18:28.436  [Pipeline] archiveArtifacts
00:18:28.440  Archiving artifacts
00:18:28.747  [Pipeline] cleanWs
00:18:28.759  [WS-CLEANUP] Deleting project workspace...
00:18:28.759  [WS-CLEANUP] Deferred wipeout is used...
00:18:28.765  [WS-CLEANUP] done
00:18:28.767  [Pipeline] }
00:18:28.784  [Pipeline] // stage
00:18:28.790  [Pipeline] }
00:18:28.805  [Pipeline] // node
00:18:28.811  [Pipeline] End of Pipeline
00:18:28.866  Finished: SUCCESS