00:00:00.001  Started by upstream project "autotest-spdk-master-vs-dpdk-main" build number 4058
00:00:00.001  originally caused by:
00:00:00.001   Started by upstream project "nightly-trigger" build number 3648
00:00:00.001   originally caused by:
00:00:00.001    Started by timer
00:00:00.119  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.120  The recommended git tool is: git
00:00:00.120  using credential 00000000-0000-0000-0000-000000000002
00:00:00.123   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/ubuntu22-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.173  Fetching changes from the remote Git repository
00:00:00.176   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.226  Using shallow fetch with depth 1
00:00:00.226  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.226   > git --version # timeout=10
00:00:00.263   > git --version # 'git version 2.39.2'
00:00:00.263  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.295  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.295   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.505   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.515   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.526  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.526   > git config core.sparsecheckout # timeout=10
00:00:07.538   > git read-tree -mu HEAD # timeout=10
00:00:07.555   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.581  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.581   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.666  [Pipeline] Start of Pipeline
00:00:07.680  [Pipeline] library
00:00:07.682  Loading library shm_lib@master
00:00:07.682  Library shm_lib@master is cached. Copying from home.
00:00:07.695  [Pipeline] node
00:00:07.714  Running on VM-host-SM16 in /var/jenkins/workspace/ubuntu22-vg-autotest
00:00:07.716  [Pipeline] {
00:00:07.724  [Pipeline] catchError
00:00:07.725  [Pipeline] {
00:00:07.735  [Pipeline] wrap
00:00:07.742  [Pipeline] {
00:00:07.750  [Pipeline] stage
00:00:07.752  [Pipeline] { (Prologue)
00:00:07.772  [Pipeline] echo
00:00:07.774  Node: VM-host-SM16
00:00:07.781  [Pipeline] cleanWs
00:00:07.791  [WS-CLEANUP] Deleting project workspace...
00:00:07.791  [WS-CLEANUP] Deferred wipeout is used...
00:00:07.797  [WS-CLEANUP] done
00:00:07.981  [Pipeline] setCustomBuildProperty
00:00:08.048  [Pipeline] httpRequest
00:00:08.339  [Pipeline] echo
00:00:08.341  Sorcerer 10.211.164.20 is alive
00:00:08.350  [Pipeline] retry
00:00:08.352  [Pipeline] {
00:00:08.363  [Pipeline] httpRequest
00:00:08.367  HttpMethod: GET
00:00:08.368  URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.369  Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.382  Response Code: HTTP/1.1 200 OK
00:00:08.383  Success: Status code 200 is in the accepted range: 200,404
00:00:08.384  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.388  [Pipeline] }
00:00:10.409  [Pipeline] // retry
00:00:10.417  [Pipeline] sh
00:00:10.697  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.714  [Pipeline] httpRequest
00:00:11.063  [Pipeline] echo
00:00:11.065  Sorcerer 10.211.164.20 is alive
00:00:11.074  [Pipeline] retry
00:00:11.076  [Pipeline] {
00:00:11.090  [Pipeline] httpRequest
00:00:11.095  HttpMethod: GET
00:00:11.096  URL: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:00:11.096  Sending request to url: http://10.211.164.20/packages/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:00:11.117  Response Code: HTTP/1.1 200 OK
00:00:11.118  Success: Status code 200 is in the accepted range: 200,404
00:00:11.118  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:02:04.868  [Pipeline] }
00:02:04.889  [Pipeline] // retry
00:02:04.898  [Pipeline] sh
00:02:05.183  + tar --no-same-owner -xf spdk_f22e807f197b361787d55ef3f148db33139db671.tar.gz
00:02:07.736  [Pipeline] sh
00:02:08.017  + git -C spdk log --oneline -n5
00:02:08.017  f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:02:08.017  8d982eda9 dpdk: add adjustments for recent rte_power changes
00:02:08.017  dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:02:08.017  73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:02:08.017  029355612 bdev_ut: add manual examine bdev unit test case
00:02:08.038  [Pipeline] withCredentials
00:02:08.049   > git --version # timeout=10
00:02:08.064   > git --version # 'git version 2.39.2'
00:02:08.080  Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:08.082  [Pipeline] {
00:02:08.091  [Pipeline] retry
00:02:08.093  [Pipeline] {
00:02:08.109  [Pipeline] sh
00:02:08.389  + git ls-remote http://dpdk.org/git/dpdk main
00:02:08.660  [Pipeline] }
00:02:08.678  [Pipeline] // retry
00:02:08.683  [Pipeline] }
00:02:08.700  [Pipeline] // withCredentials
00:02:08.710  [Pipeline] httpRequest
00:02:09.023  [Pipeline] echo
00:02:09.025  Sorcerer 10.211.164.20 is alive
00:02:09.036  [Pipeline] retry
00:02:09.038  [Pipeline] {
00:02:09.052  [Pipeline] httpRequest
00:02:09.057  HttpMethod: GET
00:02:09.057  URL: http://10.211.164.20/packages/dpdk_0c0cd5ffb0f7fc085b54d91e15cc6d0f0fdf9921.tar.gz
00:02:09.058  Sending request to url: http://10.211.164.20/packages/dpdk_0c0cd5ffb0f7fc085b54d91e15cc6d0f0fdf9921.tar.gz
00:02:09.058  Response Code: HTTP/1.1 200 OK
00:02:09.059  Success: Status code 200 is in the accepted range: 200,404
00:02:09.059  Saving response body to /var/jenkins/workspace/ubuntu22-vg-autotest/dpdk_0c0cd5ffb0f7fc085b54d91e15cc6d0f0fdf9921.tar.gz
00:02:10.174  [Pipeline] }
00:02:10.195  [Pipeline] // retry
00:02:10.203  [Pipeline] sh
00:02:10.488  + tar --no-same-owner -xf dpdk_0c0cd5ffb0f7fc085b54d91e15cc6d0f0fdf9921.tar.gz
00:02:11.962  [Pipeline] sh
00:02:12.242  + git -C dpdk log --oneline -n5
00:02:12.242  0c0cd5ffb0 version: 24.11-rc3
00:02:12.242  8c9a7471a0 dts: add checksum offload test suite
00:02:12.242  bee7cf823c dts: add checksum offload to testpmd shell
00:02:12.242  2eef9a80df dts: add dynamic queue test suite
00:02:12.242  c986c3393e dts: add testpmd port queue modification
00:02:12.261  [Pipeline] writeFile
00:02:12.277  [Pipeline] sh
00:02:12.558  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:02:12.570  [Pipeline] sh
00:02:12.850  + cat autorun-spdk.conf
00:02:12.850  SPDK_TEST_UNITTEST=1
00:02:12.850  SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.850  SPDK_TEST_NVME=1
00:02:12.850  SPDK_TEST_BLOCKDEV=1
00:02:12.850  SPDK_RUN_ASAN=1
00:02:12.850  SPDK_RUN_UBSAN=1
00:02:12.850  SPDK_TEST_NATIVE_DPDK=main
00:02:12.850  SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:12.850  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:12.857  RUN_NIGHTLY=1
00:02:12.859  [Pipeline] }
00:02:12.873  [Pipeline] // stage
00:02:12.889  [Pipeline] stage
00:02:12.891  [Pipeline] { (Run VM)
00:02:12.904  [Pipeline] sh
00:02:13.184  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:02:13.184  + echo 'Start stage prepare_nvme.sh'
00:02:13.184  Start stage prepare_nvme.sh
00:02:13.184  + [[ -n 1 ]]
00:02:13.184  + disk_prefix=ex1
00:02:13.184  + [[ -n /var/jenkins/workspace/ubuntu22-vg-autotest ]]
00:02:13.184  + [[ -e /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf ]]
00:02:13.184  + source /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf
00:02:13.184  ++ SPDK_TEST_UNITTEST=1
00:02:13.184  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:13.184  ++ SPDK_TEST_NVME=1
00:02:13.184  ++ SPDK_TEST_BLOCKDEV=1
00:02:13.185  ++ SPDK_RUN_ASAN=1
00:02:13.185  ++ SPDK_RUN_UBSAN=1
00:02:13.185  ++ SPDK_TEST_NATIVE_DPDK=main
00:02:13.185  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:13.185  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:13.185  ++ RUN_NIGHTLY=1
00:02:13.185  + cd /var/jenkins/workspace/ubuntu22-vg-autotest
00:02:13.185  + nvme_files=()
00:02:13.185  + declare -A nvme_files
00:02:13.185  + backend_dir=/var/lib/libvirt/images/backends
00:02:13.185  + nvme_files['nvme.img']=5G
00:02:13.185  + nvme_files['nvme-cmb.img']=5G
00:02:13.185  + nvme_files['nvme-multi0.img']=4G
00:02:13.185  + nvme_files['nvme-multi1.img']=4G
00:02:13.185  + nvme_files['nvme-multi2.img']=4G
00:02:13.185  + nvme_files['nvme-openstack.img']=8G
00:02:13.185  + nvme_files['nvme-zns.img']=5G
00:02:13.185  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:02:13.185  + ((  SPDK_TEST_FTL == 1  ))
00:02:13.185  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:02:13.185  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:02:13.185  + for nvme in "${!nvme_files[@]}"
00:02:13.185  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:02:13.185  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:02:13.185  + for nvme in "${!nvme_files[@]}"
00:02:13.185  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:02:13.185  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:02:13.185  + for nvme in "${!nvme_files[@]}"
00:02:13.185  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:02:13.185  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:02:13.185  + for nvme in "${!nvme_files[@]}"
00:02:13.185  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:02:13.185  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:02:13.185  + for nvme in "${!nvme_files[@]}"
00:02:13.185  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:02:13.185  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:02:13.185  + for nvme in "${!nvme_files[@]}"
00:02:13.185  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:02:13.185  Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:02:13.185  + for nvme in "${!nvme_files[@]}"
00:02:13.185  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:02:13.753  Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:02:13.753  ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:02:13.753  + echo 'End stage prepare_nvme.sh'
00:02:13.753  End stage prepare_nvme.sh
00:02:13.765  [Pipeline] sh
00:02:14.049  + DISTRO=ubuntu2204 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:02:14.049  Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f ubuntu2204
00:02:14.049  
00:02:14.049  DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant
00:02:14.049  SPDK_DIR=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk
00:02:14.049  VAGRANT_TARGET=/var/jenkins/workspace/ubuntu22-vg-autotest
00:02:14.049  HELP=0
00:02:14.049  DRY_RUN=0
00:02:14.049  NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,
00:02:14.049  NVME_DISKS_TYPE=nvme,
00:02:14.049  NVME_AUTO_CREATE=0
00:02:14.049  NVME_DISKS_NAMESPACES=,
00:02:14.049  NVME_CMB=,
00:02:14.049  NVME_PMR=,
00:02:14.049  NVME_ZNS=,
00:02:14.049  NVME_MS=,
00:02:14.049  NVME_FDP=,
00:02:14.049  SPDK_VAGRANT_DISTRO=ubuntu2204
00:02:14.049  SPDK_VAGRANT_VMCPU=10
00:02:14.049  SPDK_VAGRANT_VMRAM=12288
00:02:14.049  SPDK_VAGRANT_PROVIDER=libvirt
00:02:14.049  SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:02:14.049  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:02:14.049  SPDK_OPENSTACK_NETWORK=0
00:02:14.049  VAGRANT_PACKAGE_BOX=0
00:02:14.049  VAGRANTFILE=/var/jenkins/workspace/ubuntu22-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:02:14.049  FORCE_DISTRO=true
00:02:14.049  VAGRANT_BOX_VERSION=
00:02:14.049  EXTRA_VAGRANTFILES=
00:02:14.049  NIC_MODEL=e1000
00:02:14.049  
00:02:14.049  mkdir: created directory '/var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt'
00:02:14.049  /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt /var/jenkins/workspace/ubuntu22-vg-autotest
00:02:16.601  Bringing machine 'default' up with 'libvirt' provider...
00:02:16.859  ==> default: Creating image (snapshot of base box volume).
00:02:17.118  ==> default: Creating domain with the following settings...
00:02:17.118  ==> default:  -- Name:              ubuntu2204-22.04-1711172311-2200_default_1732078350_11ebc11f2ded415a049f
00:02:17.118  ==> default:  -- Domain type:       kvm
00:02:17.118  ==> default:  -- Cpus:              10
00:02:17.118  ==> default:  -- Feature:           acpi
00:02:17.118  ==> default:  -- Feature:           apic
00:02:17.118  ==> default:  -- Feature:           pae
00:02:17.118  ==> default:  -- Memory:            12288M
00:02:17.118  ==> default:  -- Memory Backing:    hugepages: 
00:02:17.118  ==> default:  -- Management MAC:    
00:02:17.118  ==> default:  -- Loader:            
00:02:17.118  ==> default:  -- Nvram:             
00:02:17.118  ==> default:  -- Base box:          spdk/ubuntu2204
00:02:17.118  ==> default:  -- Storage pool:      default
00:02:17.118  ==> default:  -- Image:             /var/lib/libvirt/images/ubuntu2204-22.04-1711172311-2200_default_1732078350_11ebc11f2ded415a049f.img (20G)
00:02:17.118  ==> default:  -- Volume Cache:      default
00:02:17.118  ==> default:  -- Kernel:            
00:02:17.118  ==> default:  -- Initrd:            
00:02:17.118  ==> default:  -- Graphics Type:     vnc
00:02:17.118  ==> default:  -- Graphics Port:     -1
00:02:17.118  ==> default:  -- Graphics IP:       127.0.0.1
00:02:17.118  ==> default:  -- Graphics Password: Not defined
00:02:17.118  ==> default:  -- Video Type:        cirrus
00:02:17.118  ==> default:  -- Video VRAM:        9216
00:02:17.118  ==> default:  -- Sound Type:	
00:02:17.118  ==> default:  -- Keymap:            en-us
00:02:17.118  ==> default:  -- TPM Path:          
00:02:17.118  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:02:17.118  ==> default:  -- Command line args: 
00:02:17.118  ==> default:     -> value=-device, 
00:02:17.118  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:02:17.118  ==> default:     -> value=-drive, 
00:02:17.118  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0, 
00:02:17.118  ==> default:     -> value=-device, 
00:02:17.118  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:02:17.118  ==> default: Creating shared folders metadata...
00:02:17.118  ==> default: Starting domain.
00:02:19.021  ==> default: Waiting for domain to get an IP address...
00:02:28.995  ==> default: Waiting for SSH to become available...
00:02:30.371  ==> default: Configuring and enabling network interfaces...
00:02:34.560  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:39.855  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:44.043  ==> default: Mounting SSHFS shared folder...
00:02:44.610  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output => /home/vagrant/spdk_repo/output
00:02:44.610  ==> default: Checking Mount..
00:02:45.546  ==> default: Folder Successfully Mounted!
00:02:45.546  ==> default: Running provisioner: file...
00:02:45.804      default: ~/.gitconfig => .gitconfig
00:02:46.062  
00:02:46.062    SUCCESS!
00:02:46.062  
00:02:46.062    cd to /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt and type "vagrant ssh" to use.
00:02:46.062    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:46.062    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt" to destroy all trace of vm.
00:02:46.062  
00:02:46.070  [Pipeline] }
00:02:46.085  [Pipeline] // stage
00:02:46.094  [Pipeline] dir
00:02:46.094  Running in /var/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt
00:02:46.096  [Pipeline] {
00:02:46.108  [Pipeline] catchError
00:02:46.109  [Pipeline] {
00:02:46.121  [Pipeline] sh
00:02:46.399  + vagrant ssh-config --host vagrant
00:02:46.399  + sed -ne /^Host/,$p
00:02:46.399  + tee ssh_conf
00:02:49.687  Host vagrant
00:02:49.687    HostName 192.168.121.103
00:02:49.687    User vagrant
00:02:49.687    Port 22
00:02:49.687    UserKnownHostsFile /dev/null
00:02:49.687    StrictHostKeyChecking no
00:02:49.687    PasswordAuthentication no
00:02:49.687    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-ubuntu2204/22.04-1711172311-2200/libvirt/ubuntu2204
00:02:49.687    IdentitiesOnly yes
00:02:49.687    LogLevel FATAL
00:02:49.687    ForwardAgent yes
00:02:49.687    ForwardX11 yes
00:02:49.687  
00:02:49.703  [Pipeline] withEnv
00:02:49.706  [Pipeline] {
00:02:49.722  [Pipeline] sh
00:02:50.003  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:50.003  		source /etc/os-release
00:02:50.003  		[[ -e /image.version ]] && img=$(< /image.version)
00:02:50.003  		# Minimal, systemd-like check.
00:02:50.003  		if [[ -e /.dockerenv ]]; then
00:02:50.003  			# Clear garbage from the node's name:
00:02:50.003  			#  agt-er_autotest_547-896 -> autotest_547-896
00:02:50.003  			#  $HOSTNAME is the actual container id
00:02:50.003  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:50.003  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:50.003  				# We can assume this is a mount from a host where container is running,
00:02:50.003  				# so fetch its hostname to easily identify the target swarm worker.
00:02:50.003  				container="$(< /etc/hostname) ($agent)"
00:02:50.003  			else
00:02:50.003  				# Fallback
00:02:50.003  				container=$agent
00:02:50.003  			fi
00:02:50.003  		fi
00:02:50.003  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:50.003  
00:02:50.274  [Pipeline] }
00:02:50.293  [Pipeline] // withEnv
00:02:50.302  [Pipeline] setCustomBuildProperty
00:02:50.318  [Pipeline] stage
00:02:50.320  [Pipeline] { (Tests)
00:02:50.341  [Pipeline] sh
00:02:50.627  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:50.903  [Pipeline] sh
00:02:51.189  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:51.464  [Pipeline] timeout
00:02:51.464  Timeout set to expire in 1 hr 30 min
00:02:51.466  [Pipeline] {
00:02:51.480  [Pipeline] sh
00:02:51.761  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:52.337  HEAD is now at f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb
00:02:52.370  [Pipeline] sh
00:02:52.665  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:52.937  [Pipeline] sh
00:02:53.218  + scp -F ssh_conf -r /var/jenkins/workspace/ubuntu22-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:53.495  [Pipeline] sh
00:02:53.776  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=ubuntu22-vg-autotest ./autoruner.sh spdk_repo
00:02:54.035  ++ readlink -f spdk_repo
00:02:54.035  + DIR_ROOT=/home/vagrant/spdk_repo
00:02:54.035  + [[ -n /home/vagrant/spdk_repo ]]
00:02:54.035  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:54.035  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:54.035  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:54.035  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:54.035  + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:54.036  + [[ ubuntu22-vg-autotest == pkgdep-* ]]
00:02:54.036  + cd /home/vagrant/spdk_repo
00:02:54.036  + source /etc/os-release
00:02:54.036  ++ PRETTY_NAME='Ubuntu 22.04.4 LTS'
00:02:54.036  ++ NAME=Ubuntu
00:02:54.036  ++ VERSION_ID=22.04
00:02:54.036  ++ VERSION='22.04.4 LTS (Jammy Jellyfish)'
00:02:54.036  ++ VERSION_CODENAME=jammy
00:02:54.036  ++ ID=ubuntu
00:02:54.036  ++ ID_LIKE=debian
00:02:54.036  ++ HOME_URL=https://www.ubuntu.com/
00:02:54.036  ++ SUPPORT_URL=https://help.ubuntu.com/
00:02:54.036  ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
00:02:54.036  ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
00:02:54.036  ++ UBUNTU_CODENAME=jammy
00:02:54.036  + uname -a
00:02:54.036  Linux ubuntu2204-cloud-1711172311-2200 5.15.0-101-generic #111-Ubuntu SMP Tue Mar 5 20:16:58 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
00:02:54.036  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:54.294  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:02:54.294  Hugepages
00:02:54.294  node     hugesize     free /  total
00:02:54.294  node0   1048576kB        0 /      0
00:02:54.294  node0      2048kB        0 /      0
00:02:54.294  
00:02:54.294  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:54.294  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:54.294  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:02:54.294  + rm -f /tmp/spdk-ld-path
00:02:54.294  + source autorun-spdk.conf
00:02:54.294  ++ SPDK_TEST_UNITTEST=1
00:02:54.294  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:54.294  ++ SPDK_TEST_NVME=1
00:02:54.294  ++ SPDK_TEST_BLOCKDEV=1
00:02:54.294  ++ SPDK_RUN_ASAN=1
00:02:54.295  ++ SPDK_RUN_UBSAN=1
00:02:54.295  ++ SPDK_TEST_NATIVE_DPDK=main
00:02:54.295  ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:54.295  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:54.295  ++ RUN_NIGHTLY=1
00:02:54.295  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:02:54.295  + [[ -n '' ]]
00:02:54.295  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:54.295  + for M in /var/spdk/build-*-manifest.txt
00:02:54.295  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:54.295  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:54.295  + for M in /var/spdk/build-*-manifest.txt
00:02:54.295  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:54.295  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:54.295  ++ uname
00:02:54.295  + [[ Linux == \L\i\n\u\x ]]
00:02:54.295  + sudo dmesg -T
00:02:54.554  + sudo dmesg --clear
00:02:54.554  + dmesg_pid=2288
00:02:54.554  + [[ Ubuntu == FreeBSD ]]
00:02:54.554  + sudo dmesg -Tw
00:02:54.554  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:54.554  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:54.554  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:54.554  + [[ -x /usr/src/fio-static/fio ]]
00:02:54.554  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:54.554  + [[ ! -v VFIO_QEMU_BIN ]]
00:02:54.554  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:54.554  + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:02:54.554  + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:54.554  + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:54.554  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:54.554  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:54.554    04:53:08  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:54.554   04:53:08  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_TEST_UNITTEST=1
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME=1
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_BLOCKDEV=1
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_NATIVE_DPDK=main
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:54.554    04:53:08  -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:02:54.554   04:53:08  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:54.554   04:53:08  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:54.554     04:53:08  -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:02:54.554    04:53:08  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:54.554     04:53:08  -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:54.554     04:53:08  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:54.554     04:53:08  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:54.554     04:53:08  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:54.554      04:53:08  -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:54.555      04:53:08  -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:54.555      04:53:08  -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:54.555      04:53:08  -- paths/export.sh@5 -- $ export PATH
00:02:54.555      04:53:08  -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:02:54.555    04:53:08  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:54.555      04:53:08  -- common/autobuild_common.sh@493 -- $ date +%s
00:02:54.555     04:53:08  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732078388.XXXXXX
00:02:54.555    04:53:08  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732078388.1saa6A
00:02:54.555    04:53:08  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:54.555    04:53:08  -- common/autobuild_common.sh@499 -- $ '[' -n main ']'
00:02:54.555     04:53:08  -- common/autobuild_common.sh@500 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:54.555    04:53:08  -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:02:54.555    04:53:08  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:54.555    04:53:08  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp  --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:54.555     04:53:08  -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:54.555     04:53:08  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:54.555     04:53:08  -- common/autotest_common.sh@10 -- $ set +x
00:02:54.555    04:53:08  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build'
00:02:54.555    04:53:08  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:54.555    04:53:08  -- pm/common@17 -- $ local monitor
00:02:54.555    04:53:08  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.555    04:53:08  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:54.555    04:53:08  -- pm/common@25 -- $ sleep 1
00:02:54.555     04:53:08  -- pm/common@21 -- $ date +%s
00:02:54.555     04:53:08  -- pm/common@21 -- $ date +%s
00:02:54.555    04:53:08  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732078388
00:02:54.555    04:53:08  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732078388
00:02:54.555  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732078388_collect-cpu-load.pm.log
00:02:54.555  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732078388_collect-vmstat.pm.log
00:02:55.491    04:53:09  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:55.491   04:53:09  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:55.491   04:53:09  -- spdk/autobuild.sh@12 -- $ umask 022
00:02:55.491   04:53:09  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:55.491   04:53:09  -- spdk/autobuild.sh@16 -- $ date -u
00:02:55.750  Wed Nov 20 04:53:09 UTC 2024
00:02:55.750   04:53:09  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:55.750  v25.01-pre-199-gf22e807f1
00:02:55.750   04:53:09  -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:55.750   04:53:09  -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:55.750   04:53:09  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:55.750   04:53:09  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:55.750   04:53:09  -- common/autotest_common.sh@10 -- $ set +x
00:02:55.750  ************************************
00:02:55.750  START TEST asan
00:02:55.750  ************************************
00:02:55.750  using asan
00:02:55.750   04:53:09 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:02:55.750  
00:02:55.750  real	0m0.000s
00:02:55.750  user	0m0.000s
00:02:55.750  sys	0m0.000s
00:02:55.750   04:53:09 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:55.750   04:53:09 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:55.750  ************************************
00:02:55.750  END TEST asan
00:02:55.750  ************************************
00:02:55.750   04:53:09  -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:55.750   04:53:09  -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:55.750   04:53:09  -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:55.750   04:53:09  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:55.750   04:53:09  -- common/autotest_common.sh@10 -- $ set +x
00:02:55.750  ************************************
00:02:55.750  START TEST ubsan
00:02:55.750  ************************************
00:02:55.750  using ubsan
00:02:55.750   04:53:09 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:55.750  
00:02:55.750  real	0m0.000s
00:02:55.750  user	0m0.000s
00:02:55.750  sys	0m0.000s
00:02:55.750   04:53:09 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:55.750   04:53:09 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:55.750  ************************************
00:02:55.750  END TEST ubsan
00:02:55.750  ************************************
00:02:55.750   04:53:09  -- spdk/autobuild.sh@27 -- $ '[' -n main ']'
00:02:55.750   04:53:09  -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:02:55.751   04:53:09  -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk
00:02:55.751   04:53:09  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:55.751   04:53:09  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:55.751   04:53:09  -- common/autotest_common.sh@10 -- $ set +x
00:02:55.751  ************************************
00:02:55.751  START TEST build_native_dpdk
00:02:55.751  ************************************
00:02:55.751   04:53:09 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
00:02:55.751    04:53:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=11
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=11
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
00:02:55.751    04:53:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:55.751  0c0cd5ffb0 version: 24.11-rc3
00:02:55.751  8c9a7471a0 dts: add checksum offload test suite
00:02:55.751  bee7cf823c dts: add checksum offload to testpmd shell
00:02:55.751  2eef9a80df dts: add dynamic queue test suite
00:02:55.751  c986c3393e dts: add testpmd port queue modification
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=24.11.0-rc3
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 11 -ge 5 ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 11 -ge 10 ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm")
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]]
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /home/vagrant/spdk_repo/dpdk
00:02:55.751    04:53:09 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']'
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 24.11.0-rc3 21.11.0
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 21.11.0
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]]
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1
00:02:55.751  patching file config/rte_config.h
00:02:55.751  Hunk #1 succeeded at 72 (offset 13 lines).
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 24.11.0-rc3 24.07.0
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 24.11.0-rc3 '<' 24.07.0
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<'
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@345 -- $ : 1
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:55.751    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 1
00:02:55.751   04:53:09 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 24.11.0-rc3 24.07.0
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 24.11.0-rc3 '>=' 24.07.0
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-:
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-:
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>='
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=4
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v
00:02:55.751   04:53:09 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@348 -- $ : 1
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 ))
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 24
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=24
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]]
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] ))
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v++ ))
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 11
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=11
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 11 =~ ^[0-9]+$ ]]
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 11
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=11
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 07
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@353 -- $ local d=07
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 07 =~ ^[0-9]+$ ]]
00:02:55.752    04:53:09 build_native_dpdk -- scripts/common.sh@355 -- $ echo 7
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=7
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] ))
00:02:55.752   04:53:09 build_native_dpdk -- scripts/common.sh@367 -- $ return 0
00:02:55.752   04:53:09 build_native_dpdk -- common/autobuild_common.sh@187 -- $ patch -p1
00:02:55.752  patching file drivers/bus/pci/linux/pci_uio.c
00:02:55.752   04:53:09 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false
00:02:55.752    04:53:09 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s
00:02:55.752   04:53:09 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']'
00:02:55.752    04:53:09 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm
00:02:55.752   04:53:09 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:01.026  The Meson build system
00:03:01.026  Version: 1.4.0
00:03:01.026  Source dir: /home/vagrant/spdk_repo/dpdk
00:03:01.026  Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:03:01.026  Build type: native build
00:03:01.026  Project name: DPDK
00:03:01.026  Project version: 24.11.0-rc3
00:03:01.026  C compiler for the host machine: gcc (gcc 11.4.0 "gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0")
00:03:01.026  C linker for the host machine: gcc ld.bfd 2.38
00:03:01.026  Host machine cpu family: x86_64
00:03:01.026  Host machine cpu: x86_64
00:03:01.026  Message: ## Building in Developer Mode ##
00:03:01.026  Program pkg-config found: YES (/usr/bin/pkg-config)
00:03:01.026  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:03:01.026  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:03:01.026  Program python3 (elftools) found: YES (/usr/bin/python3) modules: elftools
00:03:01.026  Program cat found: YES (/usr/bin/cat)
00:03:01.026  config/meson.build:122: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:03:01.026  Compiler for C supports arguments -march=native: YES 
00:03:01.026  Checking for size of "void *" : 8 
00:03:01.026  Checking for size of "void *" : 8 (cached)
00:03:01.026  Compiler for C supports link arguments -Wl,--undefined-version: NO 
00:03:01.026  Library m found: YES
00:03:01.026  Library numa found: YES
00:03:01.026  Has header "numaif.h" : YES 
00:03:01.026  Library fdt found: NO
00:03:01.026  Library execinfo found: NO
00:03:01.026  Has header "execinfo.h" : YES 
00:03:01.026  Found pkg-config: YES (/usr/bin/pkg-config) 0.29.2
00:03:01.026  Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:01.026  Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:01.026  Run-time dependency jansson found: NO (tried pkgconfig)
00:03:01.026  Run-time dependency openssl found: YES 3.0.2
00:03:01.026  Run-time dependency libpcap found: NO (tried pkgconfig)
00:03:01.026  Library pcap found: NO
00:03:01.026  Compiler for C supports arguments -Wcast-qual: YES 
00:03:01.026  Compiler for C supports arguments -Wdeprecated: YES 
00:03:01.026  Compiler for C supports arguments -Wformat: YES 
00:03:01.026  Compiler for C supports arguments -Wformat-nonliteral: YES 
00:03:01.026  Compiler for C supports arguments -Wformat-security: YES 
00:03:01.026  Compiler for C supports arguments -Wmissing-declarations: YES 
00:03:01.026  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:03:01.026  Compiler for C supports arguments -Wnested-externs: YES 
00:03:01.026  Compiler for C supports arguments -Wold-style-definition: YES 
00:03:01.026  Compiler for C supports arguments -Wpointer-arith: YES 
00:03:01.026  Compiler for C supports arguments -Wsign-compare: YES 
00:03:01.027  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:03:01.027  Compiler for C supports arguments -Wundef: YES 
00:03:01.027  Compiler for C supports arguments -Wwrite-strings: YES 
00:03:01.027  Compiler for C supports arguments -Wno-packed-not-aligned: YES 
00:03:01.027  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:03:01.027  Program objdump found: YES (/usr/bin/objdump)
00:03:01.027  Compiler for C supports arguments -mavx512f -mavx512vl -mavx512dq -mavx512bw: YES 
00:03:01.027  Checking if "AVX512 checking" compiles: YES 
00:03:01.027  Fetching value of define "__AVX512F__" : (undefined) 
00:03:01.027  Fetching value of define "__SSE4_2__" : 1 
00:03:01.027  Fetching value of define "__AES__" : 1 
00:03:01.027  Fetching value of define "__AVX__" : 1 
00:03:01.027  Fetching value of define "__AVX2__" : 1 
00:03:01.027  Fetching value of define "__AVX512BW__" : (undefined) 
00:03:01.027  Fetching value of define "__AVX512CD__" : (undefined) 
00:03:01.027  Fetching value of define "__AVX512DQ__" : (undefined) 
00:03:01.027  Fetching value of define "__AVX512F__" : (undefined) 
00:03:01.027  Fetching value of define "__AVX512VL__" : (undefined) 
00:03:01.027  Fetching value of define "__PCLMUL__" : 1 
00:03:01.027  Fetching value of define "__RDRND__" : 1 
00:03:01.027  Fetching value of define "__RDSEED__" : 1 
00:03:01.027  Fetching value of define "__VPCLMULQDQ__" : (undefined) 
00:03:01.027  Compiler for C supports arguments -Wno-format-truncation: YES 
00:03:01.027  Message: lib/log: Defining dependency "log"
00:03:01.027  Message: lib/kvargs: Defining dependency "kvargs"
00:03:01.027  Message: lib/argparse: Defining dependency "argparse"
00:03:01.027  Message: lib/telemetry: Defining dependency "telemetry"
00:03:01.027  Checking for function "pthread_attr_setaffinity_np" : YES 
00:03:01.027  Checking for function "getentropy" : NO 
00:03:01.027  Message: lib/eal: Defining dependency "eal"
00:03:01.027  Message: lib/ptr_compress: Defining dependency "ptr_compress"
00:03:01.027  Message: lib/ring: Defining dependency "ring"
00:03:01.027  Message: lib/rcu: Defining dependency "rcu"
00:03:01.027  Message: lib/mempool: Defining dependency "mempool"
00:03:01.027  Message: lib/mbuf: Defining dependency "mbuf"
00:03:01.027  Fetching value of define "__PCLMUL__" : 1 (cached)
00:03:01.027  Compiler for C supports arguments -mpclmul: YES 
00:03:01.027  Compiler for C supports arguments -maes: YES 
00:03:01.027  Compiler for C supports arguments -mvpclmulqdq: YES 
00:03:01.027  Message: lib/net: Defining dependency "net"
00:03:01.027  Message: lib/meter: Defining dependency "meter"
00:03:01.027  Message: lib/ethdev: Defining dependency "ethdev"
00:03:01.027  Message: lib/pci: Defining dependency "pci"
00:03:01.027  Message: lib/cmdline: Defining dependency "cmdline"
00:03:01.027  Message: lib/metrics: Defining dependency "metrics"
00:03:01.027  Message: lib/hash: Defining dependency "hash"
00:03:01.027  Message: lib/timer: Defining dependency "timer"
00:03:01.027  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:01.027  Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:03:01.027  Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:03:01.027  Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:03:01.027  Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 
00:03:01.027  Message: lib/acl: Defining dependency "acl"
00:03:01.027  Message: lib/bbdev: Defining dependency "bbdev"
00:03:01.027  Message: lib/bitratestats: Defining dependency "bitratestats"
00:03:01.027  Run-time dependency libelf found: YES 0.186
00:03:01.027  lib/bpf/meson.build:49: WARNING: libpcap is missing, rte_bpf_convert API will be disabled
00:03:01.027  Message: lib/bpf: Defining dependency "bpf"
00:03:01.027  Message: lib/cfgfile: Defining dependency "cfgfile"
00:03:01.027  Message: lib/compressdev: Defining dependency "compressdev"
00:03:01.027  Message: lib/cryptodev: Defining dependency "cryptodev"
00:03:01.027  Message: lib/distributor: Defining dependency "distributor"
00:03:01.027  Message: lib/dmadev: Defining dependency "dmadev"
00:03:01.027  Message: lib/efd: Defining dependency "efd"
00:03:01.027  Message: lib/eventdev: Defining dependency "eventdev"
00:03:01.027  Message: lib/dispatcher: Defining dependency "dispatcher"
00:03:01.027  Message: lib/gpudev: Defining dependency "gpudev"
00:03:01.027  Message: lib/gro: Defining dependency "gro"
00:03:01.027  Message: lib/gso: Defining dependency "gso"
00:03:01.027  Message: lib/ip_frag: Defining dependency "ip_frag"
00:03:01.027  Message: lib/jobstats: Defining dependency "jobstats"
00:03:01.027  Message: lib/latencystats: Defining dependency "latencystats"
00:03:01.027  Message: lib/lpm: Defining dependency "lpm"
00:03:01.027  Fetching value of define "__AVX512F__" : (undefined) (cached)
00:03:01.027  Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:03:01.027  Fetching value of define "__AVX512IFMA__" : (undefined) 
00:03:01.027  Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 
00:03:01.027  Message: lib/member: Defining dependency "member"
00:03:01.027  Message: lib/pcapng: Defining dependency "pcapng"
00:03:01.027  Message: lib/power: Defining dependency "power"
00:03:01.027  Message: lib/rawdev: Defining dependency "rawdev"
00:03:01.027  Message: lib/regexdev: Defining dependency "regexdev"
00:03:01.027  Message: lib/mldev: Defining dependency "mldev"
00:03:01.027  Message: lib/rib: Defining dependency "rib"
00:03:01.027  Message: lib/reorder: Defining dependency "reorder"
00:03:01.027  Message: lib/sched: Defining dependency "sched"
00:03:01.027  Message: lib/security: Defining dependency "security"
00:03:01.027  Message: lib/stack: Defining dependency "stack"
00:03:01.027  Has header "linux/userfaultfd.h" : YES 
00:03:01.027  Message: lib/vhost: Defining dependency "vhost"
00:03:01.027  Message: lib/ipsec: Defining dependency "ipsec"
00:03:01.027  Message: lib/pdcp: Defining dependency "pdcp"
00:03:01.027  Message: lib/fib: Defining dependency "fib"
00:03:01.027  Message: lib/port: Defining dependency "port"
00:03:01.027  Message: lib/pdump: Defining dependency "pdump"
00:03:01.027  Message: lib/table: Defining dependency "table"
00:03:01.027  Message: lib/pipeline: Defining dependency "pipeline"
00:03:01.027  Message: lib/graph: Defining dependency "graph"
00:03:01.027  Message: lib/node: Defining dependency "node"
00:03:01.027  Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:03:01.027  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:03:01.027  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:03:01.027  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:03:01.027  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:03:01.027  Compiler for C supports arguments -Wno-sign-compare: YES 
00:03:01.027  Compiler for C supports arguments -Wno-unused-value: YES 
00:03:01.027  Compiler for C supports arguments -Wno-strict-aliasing: YES 
00:03:01.027  Compiler for C supports arguments -Wno-unused-but-set-variable: YES 
00:03:01.027  Compiler for C supports arguments -Wno-unused-parameter: YES 
00:03:01.027  Compiler for C supports arguments -march=skylake-avx512: YES 
00:03:01.027  Message: drivers/net/i40e: Defining dependency "net_i40e"
00:03:01.027  Message: drivers/power/acpi: Defining dependency "power_acpi"
00:03:01.027  Message: drivers/power/amd_pstate: Defining dependency "power_amd_pstate"
00:03:01.027  Message: drivers/power/cppc: Defining dependency "power_cppc"
00:03:01.027  Message: drivers/power/intel_pstate: Defining dependency "power_intel_pstate"
00:03:01.027  Message: drivers/power/intel_uncore: Defining dependency "power_intel_uncore"
00:03:01.027  Message: drivers/power/kvm_vm: Defining dependency "power_kvm_vm"
00:03:01.027  Has header "sys/epoll.h" : YES 
00:03:01.027  Program doxygen found: YES (/usr/bin/doxygen)
00:03:01.027  Configuring doxy-api-html.conf using configuration
00:03:01.027  Configuring doxy-api-man.conf using configuration
00:03:01.027  Program mandb found: YES (/usr/bin/mandb)
00:03:01.027  Program sphinx-build found: NO
00:03:01.027  Program sphinx-build found: NO
00:03:01.027  Configuring rte_build_config.h using configuration
00:03:01.027  Message: 
00:03:01.027  =================
00:03:01.027  Applications Enabled
00:03:01.027  =================
00:03:01.027  
00:03:01.027  apps:
00:03:01.027  	graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 
00:03:01.027  	test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, test-pmd, 
00:03:01.027  	test-regex, test-sad, test-security-perf, 
00:03:01.027  
00:03:01.027  Message: 
00:03:01.027  =================
00:03:01.027  Libraries Enabled
00:03:01.027  =================
00:03:01.027  
00:03:01.027  libs:
00:03:01.027  	log, kvargs, argparse, telemetry, eal, ptr_compress, ring, rcu, 
00:03:01.027  	mempool, mbuf, net, meter, ethdev, pci, cmdline, metrics, 
00:03:01.027  	hash, timer, acl, bbdev, bitratestats, bpf, cfgfile, compressdev, 
00:03:01.027  	cryptodev, distributor, dmadev, efd, eventdev, dispatcher, gpudev, gro, 
00:03:01.027  	gso, ip_frag, jobstats, latencystats, lpm, member, pcapng, power, 
00:03:01.027  	rawdev, regexdev, mldev, rib, reorder, sched, security, stack, 
00:03:01.027  	vhost, ipsec, pdcp, fib, port, pdump, table, pipeline, 
00:03:01.027  	graph, node, 
00:03:01.027  
00:03:01.027  Message: 
00:03:01.027  ===============
00:03:01.027  Drivers Enabled
00:03:01.027  ===============
00:03:01.027  
00:03:01.027  common:
00:03:01.027  	
00:03:01.027  bus:
00:03:01.027  	pci, vdev, 
00:03:01.027  mempool:
00:03:01.027  	ring, 
00:03:01.027  dma:
00:03:01.027  	
00:03:01.027  net:
00:03:01.027  	i40e, 
00:03:01.027  raw:
00:03:01.027  	
00:03:01.027  crypto:
00:03:01.027  	
00:03:01.027  compress:
00:03:01.027  	
00:03:01.027  regex:
00:03:01.027  	
00:03:01.027  ml:
00:03:01.027  	
00:03:01.027  vdpa:
00:03:01.027  	
00:03:01.027  event:
00:03:01.027  	
00:03:01.027  baseband:
00:03:01.027  	
00:03:01.027  gpu:
00:03:01.027  	
00:03:01.027  power:
00:03:01.027  	acpi, amd_pstate, cppc, intel_pstate, intel_uncore, kvm_vm, 
00:03:01.027  
00:03:01.027  Message: 
00:03:01.027  =================
00:03:01.027  Content Skipped
00:03:01.027  =================
00:03:01.027  
00:03:01.027  apps:
00:03:01.027  	dumpcap:	missing dependency, "libpcap"
00:03:01.027  	
00:03:01.027  libs:
00:03:01.027  	
00:03:01.027  drivers:
00:03:01.027  	common/cpt:	not in enabled drivers build config
00:03:01.027  	common/dpaax:	not in enabled drivers build config
00:03:01.027  	common/iavf:	not in enabled drivers build config
00:03:01.027  	common/idpf:	not in enabled drivers build config
00:03:01.027  	common/ionic:	not in enabled drivers build config
00:03:01.028  	common/mvep:	not in enabled drivers build config
00:03:01.028  	common/octeontx:	not in enabled drivers build config
00:03:01.028  	bus/auxiliary:	not in enabled drivers build config
00:03:01.028  	bus/cdx:	not in enabled drivers build config
00:03:01.028  	bus/dpaa:	not in enabled drivers build config
00:03:01.028  	bus/fslmc:	not in enabled drivers build config
00:03:01.028  	bus/ifpga:	not in enabled drivers build config
00:03:01.028  	bus/platform:	not in enabled drivers build config
00:03:01.028  	bus/uacce:	not in enabled drivers build config
00:03:01.028  	bus/vmbus:	not in enabled drivers build config
00:03:01.028  	common/cnxk:	not in enabled drivers build config
00:03:01.028  	common/mlx5:	not in enabled drivers build config
00:03:01.028  	common/nfp:	not in enabled drivers build config
00:03:01.028  	common/nitrox:	not in enabled drivers build config
00:03:01.028  	common/qat:	not in enabled drivers build config
00:03:01.028  	common/sfc_efx:	not in enabled drivers build config
00:03:01.028  	mempool/bucket:	not in enabled drivers build config
00:03:01.028  	mempool/cnxk:	not in enabled drivers build config
00:03:01.028  	mempool/dpaa:	not in enabled drivers build config
00:03:01.028  	mempool/dpaa2:	not in enabled drivers build config
00:03:01.028  	mempool/octeontx:	not in enabled drivers build config
00:03:01.028  	mempool/stack:	not in enabled drivers build config
00:03:01.028  	dma/cnxk:	not in enabled drivers build config
00:03:01.028  	dma/dpaa:	not in enabled drivers build config
00:03:01.028  	dma/dpaa2:	not in enabled drivers build config
00:03:01.028  	dma/hisilicon:	not in enabled drivers build config
00:03:01.028  	dma/idxd:	not in enabled drivers build config
00:03:01.028  	dma/ioat:	not in enabled drivers build config
00:03:01.028  	dma/odm:	not in enabled drivers build config
00:03:01.028  	dma/skeleton:	not in enabled drivers build config
00:03:01.028  	net/af_packet:	not in enabled drivers build config
00:03:01.028  	net/af_xdp:	not in enabled drivers build config
00:03:01.028  	net/ark:	not in enabled drivers build config
00:03:01.028  	net/atlantic:	not in enabled drivers build config
00:03:01.028  	net/avp:	not in enabled drivers build config
00:03:01.028  	net/axgbe:	not in enabled drivers build config
00:03:01.028  	net/bnx2x:	not in enabled drivers build config
00:03:01.028  	net/bnxt:	not in enabled drivers build config
00:03:01.028  	net/bonding:	not in enabled drivers build config
00:03:01.028  	net/cnxk:	not in enabled drivers build config
00:03:01.028  	net/cpfl:	not in enabled drivers build config
00:03:01.028  	net/cxgbe:	not in enabled drivers build config
00:03:01.028  	net/dpaa:	not in enabled drivers build config
00:03:01.028  	net/dpaa2:	not in enabled drivers build config
00:03:01.028  	net/e1000:	not in enabled drivers build config
00:03:01.028  	net/ena:	not in enabled drivers build config
00:03:01.028  	net/enetc:	not in enabled drivers build config
00:03:01.028  	net/enetfec:	not in enabled drivers build config
00:03:01.028  	net/enic:	not in enabled drivers build config
00:03:01.028  	net/failsafe:	not in enabled drivers build config
00:03:01.028  	net/fm10k:	not in enabled drivers build config
00:03:01.028  	net/gve:	not in enabled drivers build config
00:03:01.028  	net/hinic:	not in enabled drivers build config
00:03:01.028  	net/hns3:	not in enabled drivers build config
00:03:01.028  	net/iavf:	not in enabled drivers build config
00:03:01.028  	net/ice:	not in enabled drivers build config
00:03:01.028  	net/idpf:	not in enabled drivers build config
00:03:01.028  	net/igc:	not in enabled drivers build config
00:03:01.028  	net/ionic:	not in enabled drivers build config
00:03:01.028  	net/ipn3ke:	not in enabled drivers build config
00:03:01.028  	net/ixgbe:	not in enabled drivers build config
00:03:01.028  	net/mana:	not in enabled drivers build config
00:03:01.028  	net/memif:	not in enabled drivers build config
00:03:01.028  	net/mlx4:	not in enabled drivers build config
00:03:01.028  	net/mlx5:	not in enabled drivers build config
00:03:01.028  	net/mvneta:	not in enabled drivers build config
00:03:01.028  	net/mvpp2:	not in enabled drivers build config
00:03:01.028  	net/netvsc:	not in enabled drivers build config
00:03:01.028  	net/nfb:	not in enabled drivers build config
00:03:01.028  	net/nfp:	not in enabled drivers build config
00:03:01.028  	net/ngbe:	not in enabled drivers build config
00:03:01.028  	net/ntnic:	not in enabled drivers build config
00:03:01.028  	net/null:	not in enabled drivers build config
00:03:01.028  	net/octeontx:	not in enabled drivers build config
00:03:01.028  	net/octeon_ep:	not in enabled drivers build config
00:03:01.028  	net/pcap:	not in enabled drivers build config
00:03:01.028  	net/pfe:	not in enabled drivers build config
00:03:01.028  	net/qede:	not in enabled drivers build config
00:03:01.028  	net/r8169:	not in enabled drivers build config
00:03:01.028  	net/ring:	not in enabled drivers build config
00:03:01.028  	net/sfc:	not in enabled drivers build config
00:03:01.028  	net/softnic:	not in enabled drivers build config
00:03:01.028  	net/tap:	not in enabled drivers build config
00:03:01.028  	net/thunderx:	not in enabled drivers build config
00:03:01.028  	net/txgbe:	not in enabled drivers build config
00:03:01.028  	net/vdev_netvsc:	not in enabled drivers build config
00:03:01.028  	net/vhost:	not in enabled drivers build config
00:03:01.028  	net/virtio:	not in enabled drivers build config
00:03:01.028  	net/vmxnet3:	not in enabled drivers build config
00:03:01.028  	net/zxdh:	not in enabled drivers build config
00:03:01.028  	raw/cnxk_bphy:	not in enabled drivers build config
00:03:01.028  	raw/cnxk_gpio:	not in enabled drivers build config
00:03:01.028  	raw/cnxk_rvu_lf:	not in enabled drivers build config
00:03:01.028  	raw/dpaa2_cmdif:	not in enabled drivers build config
00:03:01.028  	raw/gdtc:	not in enabled drivers build config
00:03:01.028  	raw/ifpga:	not in enabled drivers build config
00:03:01.028  	raw/ntb:	not in enabled drivers build config
00:03:01.028  	raw/skeleton:	not in enabled drivers build config
00:03:01.028  	crypto/armv8:	not in enabled drivers build config
00:03:01.028  	crypto/bcmfs:	not in enabled drivers build config
00:03:01.028  	crypto/caam_jr:	not in enabled drivers build config
00:03:01.028  	crypto/ccp:	not in enabled drivers build config
00:03:01.028  	crypto/cnxk:	not in enabled drivers build config
00:03:01.028  	crypto/dpaa_sec:	not in enabled drivers build config
00:03:01.028  	crypto/dpaa2_sec:	not in enabled drivers build config
00:03:01.028  	crypto/ionic:	not in enabled drivers build config
00:03:01.028  	crypto/ipsec_mb:	not in enabled drivers build config
00:03:01.028  	crypto/mlx5:	not in enabled drivers build config
00:03:01.028  	crypto/mvsam:	not in enabled drivers build config
00:03:01.028  	crypto/nitrox:	not in enabled drivers build config
00:03:01.028  	crypto/null:	not in enabled drivers build config
00:03:01.028  	crypto/octeontx:	not in enabled drivers build config
00:03:01.028  	crypto/openssl:	not in enabled drivers build config
00:03:01.028  	crypto/scheduler:	not in enabled drivers build config
00:03:01.028  	crypto/uadk:	not in enabled drivers build config
00:03:01.028  	crypto/virtio:	not in enabled drivers build config
00:03:01.028  	compress/isal:	not in enabled drivers build config
00:03:01.028  	compress/mlx5:	not in enabled drivers build config
00:03:01.028  	compress/nitrox:	not in enabled drivers build config
00:03:01.028  	compress/octeontx:	not in enabled drivers build config
00:03:01.028  	compress/uadk:	not in enabled drivers build config
00:03:01.028  	compress/zlib:	not in enabled drivers build config
00:03:01.028  	regex/mlx5:	not in enabled drivers build config
00:03:01.028  	regex/cn9k:	not in enabled drivers build config
00:03:01.028  	ml/cnxk:	not in enabled drivers build config
00:03:01.028  	vdpa/ifc:	not in enabled drivers build config
00:03:01.028  	vdpa/mlx5:	not in enabled drivers build config
00:03:01.028  	vdpa/nfp:	not in enabled drivers build config
00:03:01.028  	vdpa/sfc:	not in enabled drivers build config
00:03:01.028  	event/cnxk:	not in enabled drivers build config
00:03:01.028  	event/dlb2:	not in enabled drivers build config
00:03:01.028  	event/dpaa:	not in enabled drivers build config
00:03:01.028  	event/dpaa2:	not in enabled drivers build config
00:03:01.028  	event/dsw:	not in enabled drivers build config
00:03:01.028  	event/opdl:	not in enabled drivers build config
00:03:01.028  	event/skeleton:	not in enabled drivers build config
00:03:01.028  	event/sw:	not in enabled drivers build config
00:03:01.028  	event/octeontx:	not in enabled drivers build config
00:03:01.028  	baseband/acc:	not in enabled drivers build config
00:03:01.028  	baseband/fpga_5gnr_fec:	not in enabled drivers build config
00:03:01.028  	baseband/fpga_lte_fec:	not in enabled drivers build config
00:03:01.028  	baseband/la12xx:	not in enabled drivers build config
00:03:01.028  	baseband/null:	not in enabled drivers build config
00:03:01.028  	baseband/turbo_sw:	not in enabled drivers build config
00:03:01.028  	gpu/cuda:	not in enabled drivers build config
00:03:01.028  	power/amd_uncore:	not in enabled drivers build config
00:03:01.028  	
00:03:01.028  
00:03:01.028  Message: DPDK build config complete:
00:03:01.028    source path = "/home/vagrant/spdk_repo/dpdk"
00:03:01.028    build path  = "/home/vagrant/spdk_repo/dpdk/build-tmp"
00:03:01.028  Build targets in project: 248
00:03:01.028  
00:03:01.028  DPDK 24.11.0-rc3
00:03:01.028  
00:03:01.028    User defined options
00:03:01.028      libdir        : lib
00:03:01.028      prefix        : /home/vagrant/spdk_repo/dpdk/build
00:03:01.028      c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:03:01.028      c_link_args   : 
00:03:01.028      enable_docs   : false
00:03:01.299      enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm,
00:03:01.299      enable_kmods  : false
00:03:01.299      machine       : native
00:03:01.299      tests         : false
00:03:01.299  
00:03:01.299  Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:01.299  WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
00:03:01.582   04:53:15 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10
00:03:01.582  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:01.582  [1/766] Compiling C object lib/librte_log.a.p/log_log_syslog.c.o
00:03:01.582  [2/766] Compiling C object lib/librte_log.a.p/log_log_timestamp.c.o
00:03:01.582  [3/766] Compiling C object lib/librte_log.a.p/log_log_color.c.o
00:03:01.582  [4/766] Compiling C object lib/librte_log.a.p/log_log_journal.c.o
00:03:01.582  [5/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:01.582  [6/766] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:01.840  [7/766] Linking static target lib/librte_kvargs.a
00:03:01.840  [8/766] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:01.840  [9/766] Linking static target lib/librte_log.a
00:03:01.840  [10/766] Compiling C object lib/librte_argparse.a.p/argparse_rte_argparse.c.o
00:03:01.840  [11/766] Linking static target lib/librte_argparse.a
00:03:02.099  [12/766] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.099  [13/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:02.099  [14/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:02.099  [15/766] Generating lib/argparse.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.099  [16/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:02.099  [17/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:02.099  [18/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:02.099  [19/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:02.099  [20/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:02.358  [21/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:02.358  [22/766] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.358  [23/766] Linking target lib/librte_log.so.25.0
00:03:02.358  [24/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore_var.c.o
00:03:02.358  [25/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:02.616  [26/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:02.616  [27/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:02.616  [28/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:02.616  [29/766] Generating symbol file lib/librte_log.so.25.0.p/librte_log.so.25.0.symbols
00:03:02.616  [30/766] Linking target lib/librte_kvargs.so.25.0
00:03:02.616  [31/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:02.616  [32/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:02.616  [33/766] Linking target lib/librte_argparse.so.25.0
00:03:02.616  [34/766] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:02.616  [35/766] Linking static target lib/librte_telemetry.a
00:03:02.616  [36/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:02.875  [37/766] Generating symbol file lib/librte_kvargs.so.25.0.p/librte_kvargs.so.25.0.symbols
00:03:02.875  [38/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:02.875  [39/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:02.875  [40/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:03.134  [41/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:03.134  [42/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:03.134  [43/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:03.134  [44/766] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:03.134  [45/766] Linking target lib/librte_telemetry.so.25.0
00:03:03.134  [46/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:03.134  [47/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:03.134  [48/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:03.393  [49/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:03.393  [50/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:03.393  [51/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_bitset.c.o
00:03:03.393  [52/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:03.393  [53/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:03.393  [54/766] Generating symbol file lib/librte_telemetry.so.25.0.p/librte_telemetry.so.25.0.symbols
00:03:03.393  [55/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:03.393  [56/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:03.652  [57/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:03.652  [58/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:03.652  [59/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:03:03.652  [60/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:03.912  [61/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:03.912  [62/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:03.912  [63/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:03.912  [64/766] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:03.912  [65/766] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:03.912  [66/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:04.171  [67/766] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:04.171  [68/766] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:04.171  [69/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:04.171  [70/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:03:04.171  [71/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:04.171  [72/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:04.171  [73/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:04.171  [74/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:04.171  [75/766] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:04.429  [76/766] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:04.429  [77/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:03:04.688  [78/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:03:04.688  [79/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:03:04.688  [80/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:03:04.688  [81/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:04.688  [82/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:03:04.688  [83/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:04.688  [84/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:03:04.688  [85/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:04.688  [86/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:03:04.948  [87/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:03:04.948  [88/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:03:04.948  [89/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:03:04.948  [90/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:04.948  [91/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:03:04.948  [92/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_mmu.c.o
00:03:05.207  [93/766] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:05.207  [94/766] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:05.207  [95/766] Linking static target lib/librte_ring.a
00:03:05.466  [96/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:05.466  [97/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:05.466  [98/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:05.466  [99/766] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:05.466  [100/766] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:05.466  [101/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:05.466  [102/766] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:03:05.466  [103/766] Linking static target lib/librte_eal.a
00:03:05.726  [104/766] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:05.726  [105/766] Linking static target lib/librte_mempool.a
00:03:05.726  [106/766] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:05.726  [107/766] Linking static target lib/librte_rcu.a
00:03:05.726  [108/766] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:05.726  [109/766] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:05.985  [110/766] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:05.985  [111/766] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:05.985  [112/766] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:05.985  [113/766] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:05.985  [114/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:05.985  [115/766] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.244  [116/766] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:06.244  [117/766] Linking static target lib/librte_net.a
00:03:06.244  [118/766] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:06.244  [119/766] Linking static target lib/librte_mbuf.a
00:03:06.244  [120/766] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:06.244  [121/766] Linking static target lib/librte_meter.a
00:03:06.502  [122/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:06.502  [123/766] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.502  [124/766] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.502  [125/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:06.502  [126/766] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:06.502  [127/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:06.502  [128/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:06.764  [129/766] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:07.025  [130/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:07.025  [131/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:07.284  [132/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:07.284  [133/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:03:07.543  [134/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:03:07.543  [135/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:07.543  [136/766] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:07.543  [137/766] Linking static target lib/librte_pci.a
00:03:07.543  [138/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:03:07.543  [139/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:03:07.543  [140/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:03:07.543  [141/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:03:07.802  [142/766] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:07.802  [143/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:03:07.802  [144/766] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:03:07.802  [145/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:03:07.802  [146/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:03:07.802  [147/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:03:07.802  [148/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:03:08.061  [149/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:03:08.061  [150/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:03:08.061  [151/766] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:03:08.061  [152/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:03:08.061  [153/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:03:08.061  [154/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:03:08.061  [155/766] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:03:08.061  [156/766] Linking static target lib/librte_cmdline.a
00:03:08.319  [157/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:03:08.320  [158/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o
00:03:08.320  [159/766] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:03:08.320  [160/766] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o
00:03:08.320  [161/766] Linking static target lib/librte_metrics.a
00:03:08.579  [162/766] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:03:08.579  [163/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:03:08.837  [164/766] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output)
00:03:08.837  [165/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gf2_poly_math.c.o
00:03:08.837  [166/766] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:03:09.096  [167/766] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.096  [168/766] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:03:09.096  [169/766] Linking static target lib/librte_timer.a
00:03:09.355  [170/766] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:03:09.355  [171/766] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o
00:03:09.614  [172/766] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o
00:03:09.614  [173/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o
00:03:09.614  [174/766] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o
00:03:09.873  [175/766] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:03:10.132  [176/766] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o
00:03:10.132  [177/766] Linking static target lib/librte_bitratestats.a
00:03:10.132  [178/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o
00:03:10.132  [179/766] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.132  [180/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o
00:03:10.390  [181/766] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o
00:03:10.390  [182/766] Linking static target lib/librte_bbdev.a
00:03:10.390  [183/766] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:03:10.390  [184/766] Linking static target lib/librte_hash.a
00:03:10.649  [185/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o
00:03:10.908  [186/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o
00:03:10.908  [187/766] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:03:10.908  [188/766] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o
00:03:10.908  [189/766] Linking static target lib/acl/libavx2_tmp.a
00:03:10.908  [190/766] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:10.908  [191/766] Linking static target lib/librte_ethdev.a
00:03:11.167  [192/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o
00:03:11.167  [193/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o
00:03:11.167  [194/766] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.167  [195/766] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:03:11.425  [196/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o
00:03:11.425  [197/766] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o
00:03:11.425  [198/766] Linking static target lib/librte_cfgfile.a
00:03:11.425  [199/766] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o
00:03:11.425  [200/766] Linking static target lib/acl/libavx512_tmp.a
00:03:11.425  [201/766] Linking static target lib/librte_acl.a
00:03:11.684  [202/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:03:11.684  [203/766] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.684  [204/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:03:11.684  [205/766] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:03:11.684  [206/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:03:11.684  [207/766] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:03:11.684  [208/766] Linking static target lib/librte_compressdev.a
00:03:11.684  [209/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o
00:03:11.943  [210/766] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o
00:03:11.943  [211/766] Linking static target lib/librte_bpf.a
00:03:12.202  [212/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o
00:03:12.202  [213/766] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.202  [214/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:03:12.202  [215/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o
00:03:12.461  [216/766] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.461  [217/766] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o
00:03:12.461  [218/766] Linking static target lib/librte_distributor.a
00:03:12.461  [219/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:03:12.461  [220/766] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:03:12.461  [221/766] Linking static target lib/librte_dmadev.a
00:03:12.719  [222/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o
00:03:12.719  [223/766] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.978  [224/766] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.978  [225/766] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o
00:03:12.978  [226/766] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:12.978  [227/766] Linking target lib/librte_eal.so.25.0
00:03:13.237  [228/766] Generating symbol file lib/librte_eal.so.25.0.p/librte_eal.so.25.0.symbols
00:03:13.237  [229/766] Linking target lib/librte_ring.so.25.0
00:03:13.237  [230/766] Generating symbol file lib/librte_ring.so.25.0.p/librte_ring.so.25.0.symbols
00:03:13.496  [231/766] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:03:13.496  [232/766] Linking target lib/librte_rcu.so.25.0
00:03:13.496  [233/766] Linking target lib/librte_mempool.so.25.0
00:03:13.496  [234/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o
00:03:13.496  [235/766] Linking target lib/librte_meter.so.25.0
00:03:13.496  [236/766] Generating symbol file lib/librte_rcu.so.25.0.p/librte_rcu.so.25.0.symbols
00:03:13.496  [237/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o
00:03:13.496  [238/766] Generating symbol file lib/librte_mempool.so.25.0.p/librte_mempool.so.25.0.symbols
00:03:13.496  [239/766] Linking target lib/librte_pci.so.25.0
00:03:13.496  [240/766] Linking target lib/librte_timer.so.25.0
00:03:13.496  [241/766] Linking target lib/librte_mbuf.so.25.0
00:03:13.496  [242/766] Generating symbol file lib/librte_meter.so.25.0.p/librte_meter.so.25.0.symbols
00:03:13.754  [243/766] Linking target lib/librte_acl.so.25.0
00:03:13.754  [244/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o
00:03:13.754  [245/766] Generating symbol file lib/librte_pci.so.25.0.p/librte_pci.so.25.0.symbols
00:03:13.754  [246/766] Generating symbol file lib/librte_timer.so.25.0.p/librte_timer.so.25.0.symbols
00:03:13.754  [247/766] Generating symbol file lib/librte_mbuf.so.25.0.p/librte_mbuf.so.25.0.symbols
00:03:13.754  [248/766] Linking target lib/librte_cfgfile.so.25.0
00:03:13.754  [249/766] Linking static target lib/librte_efd.a
00:03:13.754  [250/766] Linking target lib/librte_dmadev.so.25.0
00:03:13.754  [251/766] Linking target lib/librte_net.so.25.0
00:03:13.754  [252/766] Linking target lib/librte_bbdev.so.25.0
00:03:13.754  [253/766] Linking target lib/librte_compressdev.so.25.0
00:03:13.754  [254/766] Linking target lib/librte_distributor.so.25.0
00:03:13.754  [255/766] Generating symbol file lib/librte_acl.so.25.0.p/librte_acl.so.25.0.symbols
00:03:13.754  [256/766] Generating symbol file lib/librte_dmadev.so.25.0.p/librte_dmadev.so.25.0.symbols
00:03:13.754  [257/766] Generating symbol file lib/librte_net.so.25.0.p/librte_net.so.25.0.symbols
00:03:13.754  [258/766] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:03:14.024  [259/766] Linking target lib/librte_cmdline.so.25.0
00:03:14.024  [260/766] Linking target lib/librte_hash.so.25.0
00:03:14.024  [261/766] Linking static target lib/librte_cryptodev.a
00:03:14.024  [262/766] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.024  [263/766] Generating symbol file lib/librte_hash.so.25.0.p/librte_hash.so.25.0.symbols
00:03:14.024  [264/766] Linking target lib/librte_efd.so.25.0
00:03:14.024  [265/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o
00:03:14.304  [266/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o
00:03:14.304  [267/766] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o
00:03:14.304  [268/766] Linking static target lib/librte_dispatcher.a
00:03:14.562  [269/766] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o
00:03:14.562  [270/766] Linking static target lib/librte_gpudev.a
00:03:14.562  [271/766] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o
00:03:14.562  [272/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o
00:03:14.562  [273/766] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o
00:03:14.821  [274/766] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output)
00:03:14.821  [275/766] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o
00:03:15.388  [276/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o
00:03:15.388  [277/766] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o
00:03:15.388  [278/766] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.388  [279/766] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o
00:03:15.388  [280/766] Linking static target lib/librte_gro.a
00:03:15.388  [281/766] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o
00:03:15.388  [282/766] Linking target lib/librte_gpudev.so.25.0
00:03:15.388  [283/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:03:15.388  [284/766] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:03:15.388  [285/766] Linking static target lib/librte_eventdev.a
00:03:15.388  [286/766] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o
00:03:15.388  [287/766] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.388  [288/766] Linking target lib/librte_cryptodev.so.25.0
00:03:15.388  [289/766] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.646  [290/766] Generating symbol file lib/librte_cryptodev.so.25.0.p/librte_cryptodev.so.25.0.symbols
00:03:15.646  [291/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o
00:03:15.646  [292/766] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o
00:03:15.646  [293/766] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o
00:03:15.646  [294/766] Linking static target lib/librte_gso.a
00:03:15.904  [295/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o
00:03:15.904  [296/766] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output)
00:03:15.904  [297/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o
00:03:15.904  [298/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o
00:03:15.904  [299/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o
00:03:16.163  [300/766] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o
00:03:16.163  [301/766] Linking static target lib/librte_jobstats.a
00:03:16.163  [302/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o
00:03:16.163  [303/766] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o
00:03:16.422  [304/766] Linking static target lib/librte_ip_frag.a
00:03:16.422  [305/766] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.422  [306/766] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o
00:03:16.422  [307/766] Linking static target lib/member/libsketch_avx512_tmp.a
00:03:16.422  [308/766] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o
00:03:16.422  [309/766] Linking static target lib/librte_latencystats.a
00:03:16.422  [310/766] Linking target lib/librte_jobstats.so.25.0
00:03:16.422  [311/766] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.422  [312/766] Compiling C object lib/librte_member.a.p/member_rte_member.c.o
00:03:16.422  [313/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o
00:03:16.422  [314/766] Linking target lib/librte_ethdev.so.25.0
00:03:16.681  [315/766] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.681  [316/766] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output)
00:03:16.681  [317/766] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:03:16.681  [318/766] Generating symbol file lib/librte_ethdev.so.25.0.p/librte_ethdev.so.25.0.symbols
00:03:16.681  [319/766] Linking target lib/librte_metrics.so.25.0
00:03:16.681  [320/766] Compiling C object lib/librte_power.a.p/power_rte_power_qos.c.o
00:03:16.681  [321/766] Linking target lib/librte_bpf.so.25.0
00:03:16.939  [322/766] Generating symbol file lib/librte_metrics.so.25.0.p/librte_metrics.so.25.0.symbols
00:03:16.939  [323/766] Linking target lib/librte_bitratestats.so.25.0
00:03:16.939  [324/766] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o
00:03:16.939  [325/766] Generating symbol file lib/librte_bpf.so.25.0.p/librte_bpf.so.25.0.symbols
00:03:16.939  [326/766] Linking target lib/librte_gro.so.25.0
00:03:16.939  [327/766] Linking target lib/librte_gso.so.25.0
00:03:16.939  [328/766] Linking static target lib/librte_lpm.a
00:03:16.939  [329/766] Linking target lib/librte_ip_frag.so.25.0
00:03:16.939  [330/766] Linking target lib/librte_latencystats.so.25.0
00:03:16.939  [331/766] Compiling C object lib/librte_power.a.p/power_rte_power_cpufreq.c.o
00:03:16.939  [332/766] Generating symbol file lib/librte_ip_frag.so.25.0.p/librte_ip_frag.so.25.0.symbols
00:03:17.198  [333/766] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:03:17.198  [334/766] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o
00:03:17.198  [335/766] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:03:17.198  [336/766] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.198  [337/766] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o
00:03:17.198  [338/766] Linking target lib/librte_lpm.so.25.0
00:03:17.198  [339/766] Linking static target lib/librte_pcapng.a
00:03:17.456  [340/766] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:03:17.456  [341/766] Linking static target lib/librte_power.a
00:03:17.456  [342/766] Generating symbol file lib/librte_lpm.so.25.0.p/librte_lpm.so.25.0.symbols
00:03:17.457  [343/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o
00:03:17.457  [344/766] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.457  [345/766] Linking target lib/librte_pcapng.so.25.0
00:03:17.716  [346/766] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o
00:03:17.716  [347/766] Linking static target lib/librte_rawdev.a
00:03:17.716  [348/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o
00:03:17.716  [349/766] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o
00:03:17.716  [350/766] Linking static target lib/librte_regexdev.a
00:03:17.716  [351/766] Generating symbol file lib/librte_pcapng.so.25.0.p/librte_pcapng.so.25.0.symbols
00:03:17.716  [352/766] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o
00:03:17.975  [353/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o
00:03:17.975  [354/766] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:17.975  [355/766] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o
00:03:17.975  [356/766] Linking static target lib/librte_mldev.a
00:03:17.975  [357/766] Linking target lib/librte_rawdev.so.25.0
00:03:18.233  [358/766] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:03:18.233  [359/766] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.233  [360/766] Linking static target lib/librte_member.a
00:03:18.233  [361/766] Linking target lib/librte_eventdev.so.25.0
00:03:18.233  [362/766] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.233  [363/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o
00:03:18.233  [364/766] Linking target lib/librte_power.so.25.0
00:03:18.233  [365/766] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.233  [366/766] Linking target lib/librte_regexdev.so.25.0
00:03:18.233  [367/766] Generating symbol file lib/librte_eventdev.so.25.0.p/librte_eventdev.so.25.0.symbols
00:03:18.233  [368/766] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o
00:03:18.233  [369/766] Linking target lib/librte_dispatcher.so.25.0
00:03:18.492  [370/766] Generating symbol file lib/librte_power.so.25.0.p/librte_power.so.25.0.symbols
00:03:18.492  [371/766] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o
00:03:18.492  [372/766] Linking static target lib/librte_rib.a
00:03:18.492  [373/766] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:03:18.492  [374/766] Linking static target lib/librte_reorder.a
00:03:18.492  [375/766] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o
00:03:18.492  [376/766] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.492  [377/766] Linking target lib/librte_member.so.25.0
00:03:18.750  [378/766] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o
00:03:18.750  [379/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o
00:03:18.750  [380/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o
00:03:18.750  [381/766] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.750  [382/766] Linking target lib/librte_reorder.so.25.0
00:03:18.750  [383/766] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o
00:03:18.750  [384/766] Linking static target lib/librte_stack.a
00:03:19.009  [385/766] Generating symbol file lib/librte_reorder.so.25.0.p/librte_reorder.so.25.0.symbols
00:03:19.009  [386/766] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.009  [387/766] Linking target lib/librte_rib.so.25.0
00:03:19.009  [388/766] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:03:19.009  [389/766] Linking static target lib/librte_security.a
00:03:19.009  [390/766] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.009  [391/766] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:03:19.009  [392/766] Generating symbol file lib/librte_rib.so.25.0.p/librte_rib.so.25.0.symbols
00:03:19.009  [393/766] Linking target lib/librte_stack.so.25.0
00:03:19.268  [394/766] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:03:19.268  [395/766] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:03:19.526  [396/766] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.526  [397/766] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:03:19.526  [398/766] Linking target lib/librte_mldev.so.25.0
00:03:19.526  [399/766] Linking target lib/librte_security.so.25.0
00:03:19.526  [400/766] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:03:19.526  [401/766] Generating symbol file lib/librte_security.so.25.0.p/librte_security.so.25.0.symbols
00:03:19.526  [402/766] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:03:19.526  [403/766] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:03:19.526  [404/766] Linking static target lib/librte_sched.a
00:03:19.785  [405/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:03:20.043  [406/766] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.043  [407/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o
00:03:20.043  [408/766] Linking target lib/librte_sched.so.25.0
00:03:20.302  [409/766] Generating symbol file lib/librte_sched.so.25.0.p/librte_sched.so.25.0.symbols
00:03:20.302  [410/766] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o
00:03:20.302  [411/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:03:20.560  [412/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:03:20.560  [413/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o
00:03:20.560  [414/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o
00:03:20.819  [415/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o
00:03:20.819  [416/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o
00:03:20.819  [417/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o
00:03:21.077  [418/766] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:03:21.077  [419/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:03:21.336  [420/766] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o
00:03:21.336  [421/766] Linking static target lib/fib/libtrie_avx512_tmp.a
00:03:21.336  [422/766] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o
00:03:21.336  [423/766] Compiling C object lib/librte_port.a.p/port_port_log.c.o
00:03:21.336  [424/766] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o
00:03:21.336  [425/766] Linking static target lib/fib/libdir24_8_avx512_tmp.a
00:03:21.336  [426/766] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:03:21.336  [427/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o
00:03:21.336  [428/766] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o
00:03:21.336  [429/766] Linking static target lib/librte_ipsec.a
00:03:21.902  [430/766] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.902  [431/766] Linking target lib/librte_ipsec.so.25.0
00:03:21.902  [432/766] Generating symbol file lib/librte_ipsec.so.25.0.p/librte_ipsec.so.25.0.symbols
00:03:22.161  [433/766] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o
00:03:22.161  [434/766] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o
00:03:22.161  [435/766] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:03:22.161  [436/766] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:03:22.161  [437/766] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:03:22.161  [438/766] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o
00:03:22.161  [439/766] Linking static target lib/librte_pdcp.a
00:03:22.420  [440/766] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:03:22.420  [441/766] Linking static target lib/librte_fib.a
00:03:22.420  [442/766] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.678  [443/766] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o
00:03:22.678  [444/766] Linking target lib/librte_pdcp.so.25.0
00:03:22.678  [445/766] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:03:22.937  [446/766] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.937  [447/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:03:22.937  [448/766] Linking target lib/librte_fib.so.25.0
00:03:22.937  [449/766] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:03:22.937  [450/766] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:03:23.195  [451/766] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o
00:03:23.195  [452/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:03:23.454  [453/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:03:23.454  [454/766] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:03:23.454  [455/766] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:03:23.454  [456/766] Linking static target lib/librte_port.a
00:03:23.454  [457/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o
00:03:23.454  [458/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o
00:03:23.713  [459/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o
00:03:23.713  [460/766] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o
00:03:23.971  [461/766] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o
00:03:23.971  [462/766] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:03:23.971  [463/766] Linking static target lib/librte_pdump.a
00:03:23.971  [464/766] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:03:23.971  [465/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o
00:03:23.971  [466/766] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.230  [467/766] Linking target lib/librte_port.so.25.0
00:03:24.230  [468/766] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:03:24.230  [469/766] Generating symbol file lib/librte_port.so.25.0.p/librte_port.so.25.0.symbols
00:03:24.230  [470/766] Linking target lib/librte_pdump.so.25.0
00:03:24.489  [471/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:03:24.489  [472/766] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:03:24.489  [473/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:03:24.489  [474/766] Compiling C object lib/librte_table.a.p/table_table_log.c.o
00:03:24.489  [475/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:03:24.748  [476/766] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o
00:03:24.748  [477/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:03:24.748  [478/766] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o
00:03:25.006  [479/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:03:25.006  [480/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:03:25.006  [481/766] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:03:25.006  [482/766] Linking static target lib/librte_table.a
00:03:25.265  [483/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:03:25.522  [484/766] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:03:25.522  [485/766] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:03:25.782  [486/766] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:03:25.782  [487/766] Linking target lib/librte_table.so.25.0
00:03:25.782  [488/766] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:03:25.782  [489/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o
00:03:25.782  [490/766] Generating symbol file lib/librte_table.so.25.0.p/librte_table.so.25.0.symbols
00:03:26.041  [491/766] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:03:26.299  [492/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:03:26.299  [493/766] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:03:26.299  [494/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o
00:03:26.299  [495/766] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o
00:03:26.299  [496/766] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:03:26.558  [497/766] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:03:26.817  [498/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:03:26.817  [499/766] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o
00:03:26.817  [500/766] Linking static target lib/librte_graph.a
00:03:26.817  [501/766] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:03:26.817  [502/766] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:03:26.817  [503/766] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o
00:03:27.384  [504/766] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:03:27.384  [505/766] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o
00:03:27.384  [506/766] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:03:27.643  [507/766] Linking target lib/librte_graph.so.25.0
00:03:27.643  [508/766] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o
00:03:27.643  [509/766] Compiling C object lib/librte_node.a.p/node_null.c.o
00:03:27.643  [510/766] Generating symbol file lib/librte_graph.so.25.0.p/librte_graph.so.25.0.symbols
00:03:27.643  [511/766] Compiling C object lib/librte_node.a.p/node_log.c.o
00:03:27.643  [512/766] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o
00:03:27.643  [513/766] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o
00:03:27.902  [514/766] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:03:27.902  [515/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:03:27.902  [516/766] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o
00:03:27.902  [517/766] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:03:28.160  [518/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:03:28.419  [519/766] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:03:28.419  [520/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:03:28.419  [521/766] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o
00:03:28.419  [522/766] Linking static target lib/librte_node.a
00:03:28.419  [523/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:03:28.419  [524/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:03:28.419  [525/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:03:28.677  [526/766] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:03:28.677  [527/766] Linking target lib/librte_node.so.25.0
00:03:28.677  [528/766] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:03:28.677  [529/766] Linking static target drivers/libtmp_rte_bus_pci.a
00:03:28.677  [530/766] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:03:28.677  [531/766] Linking static target drivers/libtmp_rte_bus_vdev.a
00:03:28.936  [532/766] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:03:28.936  [533/766] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:28.936  [534/766] Linking static target drivers/librte_bus_vdev.a
00:03:28.936  [535/766] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:03:28.936  [536/766] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:28.936  [537/766] Linking static target drivers/librte_bus_pci.a
00:03:28.936  [538/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:03:29.195  [539/766] Compiling C object drivers/librte_bus_pci.so.25.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:03:29.195  [540/766] Compiling C object drivers/librte_bus_vdev.so.25.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:03:29.195  [541/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:03:29.195  [542/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:03:29.195  [543/766] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.195  [544/766] Linking target drivers/librte_bus_vdev.so.25.0
00:03:29.195  [545/766] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:03:29.195  [546/766] Linking static target drivers/libtmp_rte_mempool_ring.a
00:03:29.454  [547/766] Generating symbol file drivers/librte_bus_vdev.so.25.0.p/librte_bus_vdev.so.25.0.symbols
00:03:29.454  [548/766] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:03:29.454  [549/766] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:29.454  [550/766] Linking static target drivers/librte_mempool_ring.a
00:03:29.454  [551/766] Compiling C object drivers/librte_mempool_ring.so.25.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:03:29.454  [552/766] Linking target drivers/librte_mempool_ring.so.25.0
00:03:29.454  [553/766] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:03:29.454  [554/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:03:29.712  [555/766] Linking target drivers/librte_bus_pci.so.25.0
00:03:29.712  [556/766] Generating symbol file drivers/librte_bus_pci.so.25.0.p/librte_bus_pci.so.25.0.symbols
00:03:29.971  [557/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:03:30.229  [558/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:03:30.229  [559/766] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:03:30.229  [560/766] Linking static target drivers/net/i40e/base/libi40e_base.a
00:03:30.797  [561/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:03:31.055  [562/766] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:03:31.055  [563/766] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:03:31.055  [564/766] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o
00:03:31.055  [565/766] Linking static target drivers/net/i40e/libi40e_avx2_lib.a
00:03:31.314  [566/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:03:31.572  [567/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:03:31.572  [568/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:03:31.572  [569/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:03:31.831  [570/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:03:31.831  [571/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:03:31.831  [572/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o
00:03:32.090  [573/766] Compiling C object drivers/libtmp_rte_power_acpi.a.p/power_acpi_acpi_cpufreq.c.o
00:03:32.090  [574/766] Linking static target drivers/libtmp_rte_power_acpi.a
00:03:32.348  [575/766] Generating drivers/rte_power_acpi.pmd.c with a custom command
00:03:32.348  [576/766] Compiling C object drivers/librte_power_acpi.a.p/meson-generated_.._rte_power_acpi.pmd.c.o
00:03:32.348  [577/766] Linking static target drivers/librte_power_acpi.a
00:03:32.348  [578/766] Compiling C object drivers/librte_power_acpi.so.25.0.p/meson-generated_.._rte_power_acpi.pmd.c.o
00:03:32.348  [579/766] Compiling C object drivers/libtmp_rte_power_amd_pstate.a.p/power_amd_pstate_amd_pstate_cpufreq.c.o
00:03:32.348  [580/766] Linking static target drivers/libtmp_rte_power_amd_pstate.a
00:03:32.348  [581/766] Linking target drivers/librte_power_acpi.so.25.0
00:03:32.608  [582/766] Compiling C object drivers/libtmp_rte_power_cppc.a.p/power_cppc_cppc_cpufreq.c.o
00:03:32.608  [583/766] Linking static target drivers/libtmp_rte_power_cppc.a
00:03:32.608  [584/766] Generating drivers/rte_power_amd_pstate.pmd.c with a custom command
00:03:32.608  [585/766] Compiling C object drivers/librte_power_amd_pstate.a.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o
00:03:32.608  [586/766] Linking static target drivers/librte_power_amd_pstate.a
00:03:32.608  [587/766] Compiling C object drivers/librte_power_amd_pstate.so.25.0.p/meson-generated_.._rte_power_amd_pstate.pmd.c.o
00:03:32.608  [588/766] Compiling C object drivers/libtmp_rte_power_intel_pstate.a.p/power_intel_pstate_intel_pstate_cpufreq.c.o
00:03:32.608  [589/766] Linking static target drivers/libtmp_rte_power_intel_pstate.a
00:03:32.608  [590/766] Linking target drivers/librte_power_amd_pstate.so.25.0
00:03:32.608  [591/766] Generating drivers/rte_power_cppc.pmd.c with a custom command
00:03:32.608  [592/766] Compiling C object drivers/librte_power_cppc.a.p/meson-generated_.._rte_power_cppc.pmd.c.o
00:03:32.608  [593/766] Linking static target drivers/librte_power_cppc.a
00:03:32.608  [594/766] Compiling C object drivers/librte_power_cppc.so.25.0.p/meson-generated_.._rte_power_cppc.pmd.c.o
00:03:32.608  [595/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_guest_channel.c.o
00:03:32.608  [596/766] Linking target drivers/librte_power_cppc.so.25.0
00:03:32.866  [597/766] Generating drivers/rte_power_intel_pstate.pmd.c with a custom command
00:03:32.866  [598/766] Compiling C object drivers/librte_power_intel_pstate.a.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o
00:03:32.866  [599/766] Linking static target drivers/librte_power_intel_pstate.a
00:03:32.866  [600/766] Compiling C object drivers/librte_power_intel_pstate.so.25.0.p/meson-generated_.._rte_power_intel_pstate.pmd.c.o
00:03:32.866  [601/766] Compiling C object drivers/libtmp_rte_power_kvm_vm.a.p/power_kvm_vm_kvm_vm.c.o
00:03:32.866  [602/766] Linking static target drivers/libtmp_rte_power_kvm_vm.a
00:03:32.866  [603/766] Generating app/graph/commands_hdr with a custom command (wrapped by meson to capture output)
00:03:32.866  [604/766] Linking target drivers/librte_power_intel_pstate.so.25.0
00:03:32.867  [605/766] Compiling C object drivers/libtmp_rte_power_intel_uncore.a.p/power_intel_uncore_intel_uncore.c.o
00:03:32.867  [606/766] Linking static target drivers/libtmp_rte_power_intel_uncore.a
00:03:32.867  [607/766] Generating drivers/rte_power_kvm_vm.pmd.c with a custom command
00:03:33.125  [608/766] Compiling C object drivers/librte_power_kvm_vm.a.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o
00:03:33.125  [609/766] Linking static target drivers/librte_power_kvm_vm.a
00:03:33.125  [610/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:03:33.125  [611/766] Compiling C object drivers/librte_power_kvm_vm.so.25.0.p/meson-generated_.._rte_power_kvm_vm.pmd.c.o
00:03:33.125  [612/766] Generating drivers/rte_power_intel_uncore.pmd.c with a custom command
00:03:33.125  [613/766] Compiling C object drivers/librte_power_intel_uncore.a.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o
00:03:33.125  [614/766] Linking static target drivers/librte_power_intel_uncore.a
00:03:33.125  [615/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:03:33.125  [616/766] Compiling C object drivers/librte_power_intel_uncore.so.25.0.p/meson-generated_.._rte_power_intel_uncore.pmd.c.o
00:03:33.125  [617/766] Linking target drivers/librte_power_intel_uncore.so.25.0
00:03:33.384  [618/766] Generating drivers/rte_power_kvm_vm.sym_chk with a custom command (wrapped by meson to capture output)
00:03:33.384  [619/766] Compiling C object app/dpdk-graph.p/graph_cli.c.o
00:03:33.384  [620/766] Linking target drivers/librte_power_kvm_vm.so.25.0
00:03:33.384  [621/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:03:33.384  [622/766] Compiling C object app/dpdk-graph.p/graph_conn.c.o
00:03:33.642  [623/766] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o
00:03:33.642  [624/766] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o
00:03:34.009  [625/766] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o
00:03:34.009  [626/766] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o
00:03:34.009  [627/766] Compiling C object app/dpdk-graph.p/graph_l2fwd.c.o
00:03:34.009  [628/766] Compiling C object app/dpdk-graph.p/graph_graph.c.o
00:03:34.009  [629/766] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:03:34.009  [630/766] Linking static target drivers/libtmp_rte_net_i40e.a
00:03:34.009  [631/766] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o
00:03:34.009  [632/766] Compiling C object app/dpdk-graph.p/graph_main.c.o
00:03:34.289  [633/766] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:03:34.289  [634/766] Compiling C object app/dpdk-graph.p/graph_mempool.c.o
00:03:34.289  [635/766] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:34.289  [636/766] Compiling C object drivers/librte_net_i40e.so.25.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:03:34.289  [637/766] Linking static target drivers/librte_net_i40e.a
00:03:34.289  [638/766] Compiling C object app/dpdk-graph.p/graph_utils.c.o
00:03:34.289  [639/766] Compiling C object app/dpdk-graph.p/graph_neigh.c.o
00:03:34.548  [640/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:03:34.548  [641/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:03:34.548  [642/766] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:03:34.548  [643/766] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:03:34.806  [644/766] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:03:35.065  [645/766] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:03:35.065  [646/766] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:03:35.065  [647/766] Linking target drivers/librte_net_i40e.so.25.0
00:03:35.065  [648/766] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:03:35.065  [649/766] Linking static target lib/librte_vhost.a
00:03:35.324  [650/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:03:35.324  [651/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:03:35.324  [652/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:03:35.583  [653/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:03:35.583  [654/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:03:35.842  [655/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:03:35.842  [656/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:03:35.842  [657/766] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:03:36.100  [658/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:03:36.100  [659/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:03:36.100  [660/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:03:36.667  [661/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:03:36.667  [662/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:03:36.667  [663/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:03:36.667  [664/766] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:03:36.667  [665/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:03:36.667  [666/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:03:36.667  [667/766] Linking target lib/librte_vhost.so.25.0
00:03:36.667  [668/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:03:36.925  [669/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:03:36.925  [670/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:03:36.925  [671/766] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:03:37.184  [672/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o
00:03:37.184  [673/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:03:37.184  [674/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:03:37.443  [675/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:03:37.443  [676/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:03:37.702  [677/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:03:37.702  [678/766] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o
00:03:37.961  [679/766] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:03:37.961  [680/766] Linking static target lib/librte_pipeline.a
00:03:38.220  [681/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:03:38.220  [682/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:03:38.479  [683/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:03:38.479  [684/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:03:38.479  [685/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:03:38.479  [686/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:03:38.737  [687/766] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:03:38.737  [688/766] Linking target app/dpdk-graph
00:03:38.737  [689/766] Linking target app/dpdk-proc-info
00:03:38.737  [690/766] Linking target app/dpdk-pdump
00:03:38.995  [691/766] Linking target app/dpdk-test-acl
00:03:38.996  [692/766] Linking target app/dpdk-test-compress-perf
00:03:38.996  [693/766] Linking target app/dpdk-test-cmdline
00:03:39.254  [694/766] Linking target app/dpdk-test-crypto-perf
00:03:39.254  [695/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:03:39.254  [696/766] Linking target app/dpdk-test-dma-perf
00:03:39.254  [697/766] Linking target app/dpdk-test-fib
00:03:39.513  [698/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o
00:03:39.513  [699/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:03:39.513  [700/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o
00:03:39.513  [701/766] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:03:39.513  [702/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o
00:03:39.772  [703/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o
00:03:39.772  [704/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o
00:03:40.030  [705/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o
00:03:40.030  [706/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o
00:03:40.030  [707/766] Linking target app/dpdk-test-gpudev
00:03:40.031  [708/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o
00:03:40.289  [709/766] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:03:40.289  [710/766] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:03:40.289  [711/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o
00:03:40.289  [712/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o
00:03:40.289  [713/766] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:03:40.289  [714/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o
00:03:40.547  [715/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:03:40.547  [716/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:03:40.806  [717/766] Linking target app/dpdk-test-flow-perf
00:03:40.806  [718/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:03:40.806  [719/766] Linking target app/dpdk-test-eventdev
00:03:40.806  [720/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:03:40.806  [721/766] Linking target app/dpdk-test-bbdev
00:03:41.064  [722/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:03:41.064  [723/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:03:41.064  [724/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:03:41.064  [725/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:03:41.322  [726/766] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:03:41.322  [727/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o
00:03:41.322  [728/766] Linking target lib/librte_pipeline.so.25.0
00:03:41.322  [729/766] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:03:41.581  [730/766] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:03:41.581  [731/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:03:41.581  [732/766] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o
00:03:41.839  [733/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:03:42.098  [734/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:03:42.098  [735/766] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:03:42.098  [736/766] Linking target app/dpdk-test-pipeline
00:03:42.098  [737/766] Linking target app/dpdk-test-mldev
00:03:42.098  [738/766] Compiling C object app/dpdk-testpmd.p/test-pmd_hairpin.c.o
00:03:42.665  [739/766] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:03:42.665  [740/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:03:42.665  [741/766] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:03:42.665  [742/766] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:03:42.665  [743/766] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:03:42.665  [744/766] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:03:42.923  [745/766] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o
00:03:43.181  [746/766] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:03:43.181  [747/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:03:43.181  [748/766] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:03:43.439  [749/766] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:03:43.439  [750/766] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:03:43.698  [751/766] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:03:43.957  [752/766] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:03:43.957  [753/766] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:03:43.957  [754/766] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:03:43.957  [755/766] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:03:43.957  [756/766] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:03:44.216  [757/766] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:03:44.216  [758/766] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:03:44.216  [759/766] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:03:44.474  [760/766] Linking target app/dpdk-test-regex
00:03:44.474  [761/766] Linking target app/dpdk-test-sad
00:03:44.474  [762/766] Compiling C object app/dpdk-test-security-perf.p/test_test_security_proto.c.o
00:03:44.733  [763/766] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:03:44.733  [764/766] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:03:44.991  [765/766] Linking target app/dpdk-test-security-perf
00:03:45.250  [766/766] Linking target app/dpdk-testpmd
00:03:45.250    04:53:59 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:03:45.250   04:53:59 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:45.250   04:53:59 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install
00:03:45.250  ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp'
00:03:45.250  [0/1] Installing files.
00:03:45.510  Installing subdir /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/counters.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/memory.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/usertools/telemetry-endpoints/cpu.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/telemetry-endpoints
00:03:45.510  Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.510  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.511  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_eddsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipv6_addr_swap.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_skeleton.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_ipv4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_mpls.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/snippets/snippet_match_gre.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering/snippets
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:45.512  Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:45.512  Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_argparse.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.512  Installing lib/librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:45.513  Installing lib/librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing lib/librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_power_acpi.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_power_amd_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_power_cppc.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_power_intel_pstate.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_power_intel_uncore.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing drivers/librte_power_kvm_vm.a to /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.082  Installing drivers/librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0
00:03:46.082  Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.082  Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.082  Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.082  Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.082  Installing /home/vagrant/spdk_repo/dpdk/lib/argparse/rte_argparse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.082  Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.082  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitset.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore_var.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ptr_compress/rte_ptr_compress.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.083  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_cksum.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip4.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.084  Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/power/power_uncore_ops.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_cpufreq.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_qos.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.085  Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/drivers/power/kvm_vm/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry-exporter.py to /home/vagrant/spdk_repo/dpdk/build/bin
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:46.086  Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
00:03:46.086  Installing symlink pointing to librte_log.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.25
00:03:46.086  Installing symlink pointing to librte_log.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so
00:03:46.086  Installing symlink pointing to librte_kvargs.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.25
00:03:46.086  Installing symlink pointing to librte_kvargs.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so
00:03:46.086  Installing symlink pointing to librte_argparse.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so.25
00:03:46.086  Installing symlink pointing to librte_argparse.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_argparse.so
00:03:46.086  Installing symlink pointing to librte_telemetry.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.25
00:03:46.086  Installing symlink pointing to librte_telemetry.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so
00:03:46.086  Installing symlink pointing to librte_eal.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.25
00:03:46.086  Installing symlink pointing to librte_eal.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so
00:03:46.086  Installing symlink pointing to librte_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.25
00:03:46.086  Installing symlink pointing to librte_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so
00:03:46.086  Installing symlink pointing to librte_rcu.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.25
00:03:46.086  Installing symlink pointing to librte_rcu.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so
00:03:46.086  Installing symlink pointing to librte_mempool.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.25
00:03:46.086  Installing symlink pointing to librte_mempool.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so
00:03:46.086  Installing symlink pointing to librte_mbuf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.25
00:03:46.086  Installing symlink pointing to librte_mbuf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so
00:03:46.086  Installing symlink pointing to librte_net.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.25
00:03:46.086  Installing symlink pointing to librte_net.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so
00:03:46.086  Installing symlink pointing to librte_meter.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.25
00:03:46.086  Installing symlink pointing to librte_meter.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so
00:03:46.086  Installing symlink pointing to librte_ethdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.25
00:03:46.086  Installing symlink pointing to librte_ethdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so
00:03:46.086  Installing symlink pointing to librte_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.25
00:03:46.086  Installing symlink pointing to librte_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so
00:03:46.086  Installing symlink pointing to librte_cmdline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.25
00:03:46.086  Installing symlink pointing to librte_cmdline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so
00:03:46.086  Installing symlink pointing to librte_metrics.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.25
00:03:46.086  Installing symlink pointing to librte_metrics.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so
00:03:46.086  Installing symlink pointing to librte_hash.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.25
00:03:46.086  Installing symlink pointing to librte_hash.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so
00:03:46.086  Installing symlink pointing to librte_timer.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.25
00:03:46.086  Installing symlink pointing to librte_timer.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so
00:03:46.086  Installing symlink pointing to librte_acl.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.25
00:03:46.086  Installing symlink pointing to librte_acl.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so
00:03:46.086  Installing symlink pointing to librte_bbdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.25
00:03:46.086  Installing symlink pointing to librte_bbdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so
00:03:46.086  Installing symlink pointing to librte_bitratestats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.25
00:03:46.086  Installing symlink pointing to librte_bitratestats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so
00:03:46.086  Installing symlink pointing to librte_bpf.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.25
00:03:46.086  Installing symlink pointing to librte_bpf.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so
00:03:46.086  Installing symlink pointing to librte_cfgfile.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.25
00:03:46.086  Installing symlink pointing to librte_cfgfile.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so
00:03:46.086  Installing symlink pointing to librte_compressdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.25
00:03:46.086  Installing symlink pointing to librte_compressdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so
00:03:46.086  Installing symlink pointing to librte_cryptodev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.25
00:03:46.086  Installing symlink pointing to librte_cryptodev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so
00:03:46.086  Installing symlink pointing to librte_distributor.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.25
00:03:46.086  Installing symlink pointing to librte_distributor.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so
00:03:46.086  Installing symlink pointing to librte_dmadev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.25
00:03:46.086  Installing symlink pointing to librte_dmadev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so
00:03:46.086  Installing symlink pointing to librte_efd.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.25
00:03:46.086  Installing symlink pointing to librte_efd.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so
00:03:46.086  Installing symlink pointing to librte_eventdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.25
00:03:46.086  Installing symlink pointing to librte_eventdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so
00:03:46.086  Installing symlink pointing to librte_dispatcher.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.25
00:03:46.086  Installing symlink pointing to librte_dispatcher.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so
00:03:46.086  Installing symlink pointing to librte_gpudev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.25
00:03:46.086  Installing symlink pointing to librte_gpudev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so
00:03:46.086  Installing symlink pointing to librte_gro.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.25
00:03:46.086  Installing symlink pointing to librte_gro.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so
00:03:46.086  Installing symlink pointing to librte_gso.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.25
00:03:46.086  Installing symlink pointing to librte_gso.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so
00:03:46.086  Installing symlink pointing to librte_ip_frag.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.25
00:03:46.086  Installing symlink pointing to librte_ip_frag.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so
00:03:46.086  Installing symlink pointing to librte_jobstats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.25
00:03:46.086  Installing symlink pointing to librte_jobstats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so
00:03:46.086  Installing symlink pointing to librte_latencystats.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.25
00:03:46.086  Installing symlink pointing to librte_latencystats.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so
00:03:46.086  Installing symlink pointing to librte_lpm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.25
00:03:46.086  Installing symlink pointing to librte_lpm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so
00:03:46.086  Installing symlink pointing to librte_member.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.25
00:03:46.086  Installing symlink pointing to librte_member.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so
00:03:46.086  Installing symlink pointing to librte_pcapng.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.25
00:03:46.086  Installing symlink pointing to librte_pcapng.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so
00:03:46.086  Installing symlink pointing to librte_power.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.25
00:03:46.086  Installing symlink pointing to librte_power.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so
00:03:46.086  Installing symlink pointing to librte_rawdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.25
00:03:46.086  Installing symlink pointing to librte_rawdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so
00:03:46.086  Installing symlink pointing to librte_regexdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.25
00:03:46.086  Installing symlink pointing to librte_regexdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so
00:03:46.086  Installing symlink pointing to librte_mldev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.25
00:03:46.086  Installing symlink pointing to librte_mldev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so
00:03:46.087  Installing symlink pointing to librte_rib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.25
00:03:46.087  Installing symlink pointing to librte_rib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so
00:03:46.087  Installing symlink pointing to librte_reorder.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.25
00:03:46.087  Installing symlink pointing to librte_reorder.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so
00:03:46.087  Installing symlink pointing to librte_sched.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.25
00:03:46.087  Installing symlink pointing to librte_sched.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so
00:03:46.087  Installing symlink pointing to librte_security.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.25
00:03:46.087  Installing symlink pointing to librte_security.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so
00:03:46.087  Installing symlink pointing to librte_stack.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.25
00:03:46.087  Installing symlink pointing to librte_stack.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so
00:03:46.087  Installing symlink pointing to librte_vhost.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.25
00:03:46.087  Installing symlink pointing to librte_vhost.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so
00:03:46.087  Installing symlink pointing to librte_ipsec.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.25
00:03:46.087  Installing symlink pointing to librte_ipsec.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so
00:03:46.087  Installing symlink pointing to librte_pdcp.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.25
00:03:46.087  Installing symlink pointing to librte_pdcp.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so
00:03:46.087  Installing symlink pointing to librte_fib.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.25
00:03:46.087  Installing symlink pointing to librte_fib.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so
00:03:46.087  Installing symlink pointing to librte_port.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.25
00:03:46.087  Installing symlink pointing to librte_port.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so
00:03:46.087  Installing symlink pointing to librte_pdump.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.25
00:03:46.087  Installing symlink pointing to librte_pdump.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so
00:03:46.087  Installing symlink pointing to librte_table.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.25
00:03:46.087  Installing symlink pointing to librte_table.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so
00:03:46.087  Installing symlink pointing to librte_pipeline.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.25
00:03:46.087  Installing symlink pointing to librte_pipeline.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so
00:03:46.087  Installing symlink pointing to librte_graph.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.25
00:03:46.087  Installing symlink pointing to librte_graph.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so
00:03:46.087  Installing symlink pointing to librte_node.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.25
00:03:46.087  Installing symlink pointing to librte_node.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so
00:03:46.087  Installing symlink pointing to librte_bus_pci.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so.25
00:03:46.087  Installing symlink pointing to librte_bus_pci.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_pci.so
00:03:46.087  Installing symlink pointing to librte_bus_vdev.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so.25
00:03:46.087  Installing symlink pointing to librte_bus_vdev.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_bus_vdev.so
00:03:46.087  Installing symlink pointing to librte_mempool_ring.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so.25
00:03:46.087  Installing symlink pointing to librte_mempool_ring.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_mempool_ring.so
00:03:46.087  Installing symlink pointing to librte_net_i40e.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so.25
00:03:46.087  './librte_bus_pci.so' -> 'dpdk/pmds-25.0/librte_bus_pci.so'
00:03:46.087  './librte_bus_pci.so.25' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25'
00:03:46.087  './librte_bus_pci.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_pci.so.25.0'
00:03:46.087  './librte_bus_vdev.so' -> 'dpdk/pmds-25.0/librte_bus_vdev.so'
00:03:46.087  './librte_bus_vdev.so.25' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25'
00:03:46.087  './librte_bus_vdev.so.25.0' -> 'dpdk/pmds-25.0/librte_bus_vdev.so.25.0'
00:03:46.087  './librte_mempool_ring.so' -> 'dpdk/pmds-25.0/librte_mempool_ring.so'
00:03:46.087  './librte_mempool_ring.so.25' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25'
00:03:46.087  './librte_mempool_ring.so.25.0' -> 'dpdk/pmds-25.0/librte_mempool_ring.so.25.0'
00:03:46.087  './librte_net_i40e.so' -> 'dpdk/pmds-25.0/librte_net_i40e.so'
00:03:46.087  './librte_net_i40e.so.25' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25'
00:03:46.087  './librte_net_i40e.so.25.0' -> 'dpdk/pmds-25.0/librte_net_i40e.so.25.0'
00:03:46.087  './librte_power_acpi.so' -> 'dpdk/pmds-25.0/librte_power_acpi.so'
00:03:46.087  './librte_power_acpi.so.25' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25'
00:03:46.087  './librte_power_acpi.so.25.0' -> 'dpdk/pmds-25.0/librte_power_acpi.so.25.0'
00:03:46.087  './librte_power_amd_pstate.so' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so'
00:03:46.087  './librte_power_amd_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25'
00:03:46.087  './librte_power_amd_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_amd_pstate.so.25.0'
00:03:46.087  './librte_power_cppc.so' -> 'dpdk/pmds-25.0/librte_power_cppc.so'
00:03:46.087  './librte_power_cppc.so.25' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25'
00:03:46.087  './librte_power_cppc.so.25.0' -> 'dpdk/pmds-25.0/librte_power_cppc.so.25.0'
00:03:46.087  './librte_power_intel_pstate.so' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so'
00:03:46.087  './librte_power_intel_pstate.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25'
00:03:46.087  './librte_power_intel_pstate.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_pstate.so.25.0'
00:03:46.087  './librte_power_intel_uncore.so' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so'
00:03:46.087  './librte_power_intel_uncore.so.25' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25'
00:03:46.087  './librte_power_intel_uncore.so.25.0' -> 'dpdk/pmds-25.0/librte_power_intel_uncore.so.25.0'
00:03:46.087  './librte_power_kvm_vm.so' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so'
00:03:46.087  './librte_power_kvm_vm.so.25' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25'
00:03:46.087  './librte_power_kvm_vm.so.25.0' -> 'dpdk/pmds-25.0/librte_power_kvm_vm.so.25.0'
00:03:46.087  Installing symlink pointing to librte_net_i40e.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_net_i40e.so
00:03:46.087  Installing symlink pointing to librte_power_acpi.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so.25
00:03:46.087  Installing symlink pointing to librte_power_acpi.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_acpi.so
00:03:46.087  Installing symlink pointing to librte_power_amd_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so.25
00:03:46.087  Installing symlink pointing to librte_power_amd_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_amd_pstate.so
00:03:46.087  Installing symlink pointing to librte_power_cppc.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so.25
00:03:46.087  Installing symlink pointing to librte_power_cppc.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_cppc.so
00:03:46.087  Installing symlink pointing to librte_power_intel_pstate.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so.25
00:03:46.087  Installing symlink pointing to librte_power_intel_pstate.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_pstate.so
00:03:46.087  Installing symlink pointing to librte_power_intel_uncore.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so.25
00:03:46.087  Installing symlink pointing to librte_power_intel_uncore.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_intel_uncore.so
00:03:46.087  Installing symlink pointing to librte_power_kvm_vm.so.25.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so.25
00:03:46.087  Installing symlink pointing to librte_power_kvm_vm.so.25 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-25.0/librte_power_kvm_vm.so
00:03:46.087  Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-25.0'
00:03:46.087   04:54:00 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat
00:03:46.087   04:54:00 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /home/vagrant/spdk_repo/spdk
00:03:46.087  
00:03:46.087  real	0m50.297s
00:03:46.087  user	6m10.211s
00:03:46.087  sys	0m53.056s
00:03:46.087   04:54:00 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:46.087   04:54:00 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x
00:03:46.087  ************************************
00:03:46.087  END TEST build_native_dpdk
00:03:46.087  ************************************
00:03:46.346   04:54:00  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:46.346   04:54:00  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:46.346   04:54:00  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:46.346   04:54:00  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:46.346   04:54:00  -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:03:46.346   04:54:00  -- spdk/autobuild.sh@58 -- $ unittest_build
00:03:46.346   04:54:00  -- common/autobuild_common.sh@433 -- $ run_test unittest_build _unittest_build
00:03:46.346   04:54:00  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:03:46.346   04:54:00  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:03:46.346   04:54:00  -- common/autotest_common.sh@10 -- $ set +x
00:03:46.346  ************************************
00:03:46.346  START TEST unittest_build
00:03:46.346  ************************************
00:03:46.346   04:54:00 unittest_build -- common/autotest_common.sh@1129 -- $ _unittest_build
00:03:46.346   04:54:00 unittest_build -- common/autobuild_common.sh@424 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --enable-ubsan --enable-asan --enable-coverage --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --without-shared
00:03:46.346  Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs...
00:03:46.346  DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib
00:03:46.346  DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include
00:03:46.346  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:46.604  Using 'verbs' RDMA provider
00:03:59.827  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:04:12.034  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:04:12.034  Creating mk/config.mk...done.
00:04:12.034  Creating mk/cc.flags.mk...done.
00:04:12.034  Type 'make' to build.
00:04:12.034   04:54:25 unittest_build -- common/autobuild_common.sh@425 -- $ make -j10
00:04:12.293  make[1]: Nothing to be done for 'all'.
00:04:44.368    CC lib/log/log_flags.o
00:04:44.368    CC lib/log/log.o
00:04:44.368    CC lib/ut/ut.o
00:04:44.368    CC lib/log/log_deprecated.o
00:04:44.369    CC lib/ut_mock/mock.o
00:04:44.369    LIB libspdk_log.a
00:04:44.369    LIB libspdk_ut_mock.a
00:04:44.369    LIB libspdk_ut.a
00:04:44.369    CC lib/ioat/ioat.o
00:04:44.369    CC lib/dma/dma.o
00:04:44.369    CXX lib/trace_parser/trace.o
00:04:44.369    CC lib/util/base64.o
00:04:44.369    CC lib/util/bit_array.o
00:04:44.369    CC lib/util/cpuset.o
00:04:44.369    CC lib/util/crc16.o
00:04:44.369    CC lib/util/crc32.o
00:04:44.369    CC lib/util/crc32c.o
00:04:44.369    CC lib/vfio_user/host/vfio_user_pci.o
00:04:44.369    CC lib/vfio_user/host/vfio_user.o
00:04:44.369    CC lib/util/crc32_ieee.o
00:04:44.369    CC lib/util/crc64.o
00:04:44.369    CC lib/util/dif.o
00:04:44.369    CC lib/util/fd.o
00:04:44.369    LIB libspdk_dma.a
00:04:44.369    CC lib/util/fd_group.o
00:04:44.369    CC lib/util/file.o
00:04:44.369    CC lib/util/hexlify.o
00:04:44.369    CC lib/util/iov.o
00:04:44.369    LIB libspdk_ioat.a
00:04:44.369    CC lib/util/math.o
00:04:44.369    CC lib/util/net.o
00:04:44.369    CC lib/util/pipe.o
00:04:44.369    LIB libspdk_vfio_user.a
00:04:44.369    CC lib/util/strerror_tls.o
00:04:44.369    CC lib/util/string.o
00:04:44.369    CC lib/util/uuid.o
00:04:44.369    CC lib/util/xor.o
00:04:44.369    CC lib/util/zipf.o
00:04:44.369    CC lib/util/md5.o
00:04:44.369    LIB libspdk_util.a
00:04:44.369    CC lib/conf/conf.o
00:04:44.369    CC lib/rdma_utils/rdma_utils.o
00:04:44.369    CC lib/json/json_parse.o
00:04:44.369    CC lib/json/json_util.o
00:04:44.369    CC lib/json/json_write.o
00:04:44.369    CC lib/env_dpdk/memory.o
00:04:44.369    CC lib/env_dpdk/env.o
00:04:44.369    CC lib/vmd/vmd.o
00:04:44.369    LIB libspdk_trace_parser.a
00:04:44.369    CC lib/idxd/idxd.o
00:04:44.369    CC lib/vmd/led.o
00:04:44.369    LIB libspdk_conf.a
00:04:44.369    CC lib/env_dpdk/pci.o
00:04:44.369    CC lib/env_dpdk/init.o
00:04:44.369    CC lib/env_dpdk/threads.o
00:04:44.369    CC lib/env_dpdk/pci_ioat.o
00:04:44.369    LIB libspdk_rdma_utils.a
00:04:44.369    LIB libspdk_json.a
00:04:44.369    CC lib/env_dpdk/pci_virtio.o
00:04:44.369    CC lib/env_dpdk/pci_vmd.o
00:04:44.369    CC lib/env_dpdk/pci_idxd.o
00:04:44.369    CC lib/env_dpdk/pci_event.o
00:04:44.369    CC lib/env_dpdk/sigbus_handler.o
00:04:44.369    CC lib/env_dpdk/pci_dpdk.o
00:04:44.369    CC lib/env_dpdk/pci_dpdk_2207.o
00:04:44.369    CC lib/env_dpdk/pci_dpdk_2211.o
00:04:44.369    CC lib/idxd/idxd_user.o
00:04:44.369    CC lib/rdma_provider/rdma_provider_verbs.o
00:04:44.369    CC lib/rdma_provider/common.o
00:04:44.369    LIB libspdk_vmd.a
00:04:44.369    LIB libspdk_idxd.a
00:04:44.369    LIB libspdk_rdma_provider.a
00:04:44.369    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:04:44.369    CC lib/jsonrpc/jsonrpc_server.o
00:04:44.369    CC lib/jsonrpc/jsonrpc_client.o
00:04:44.369    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:04:44.369    LIB libspdk_jsonrpc.a
00:04:44.369    CC lib/rpc/rpc.o
00:04:44.369    LIB libspdk_env_dpdk.a
00:04:44.369    LIB libspdk_rpc.a
00:04:44.628    CC lib/notify/notify.o
00:04:44.628    CC lib/notify/notify_rpc.o
00:04:44.628    CC lib/keyring/keyring.o
00:04:44.628    CC lib/keyring/keyring_rpc.o
00:04:44.628    CC lib/trace/trace.o
00:04:44.628    CC lib/trace/trace_flags.o
00:04:44.628    CC lib/trace/trace_rpc.o
00:04:44.886    LIB libspdk_notify.a
00:04:44.887    LIB libspdk_keyring.a
00:04:44.887    LIB libspdk_trace.a
00:04:45.145    CC lib/sock/sock.o
00:04:45.145    CC lib/sock/sock_rpc.o
00:04:45.145    CC lib/thread/thread.o
00:04:45.145    CC lib/thread/iobuf.o
00:04:45.714    LIB libspdk_sock.a
00:04:45.973    CC lib/nvme/nvme_ctrlr_cmd.o
00:04:45.973    CC lib/nvme/nvme_ctrlr.o
00:04:45.973    CC lib/nvme/nvme_fabric.o
00:04:45.973    CC lib/nvme/nvme_ns_cmd.o
00:04:45.973    CC lib/nvme/nvme_ns.o
00:04:45.973    CC lib/nvme/nvme_pcie_common.o
00:04:45.973    CC lib/nvme/nvme_pcie.o
00:04:45.973    CC lib/nvme/nvme_qpair.o
00:04:45.973    CC lib/nvme/nvme.o
00:04:46.540    CC lib/nvme/nvme_quirks.o
00:04:46.540    CC lib/nvme/nvme_transport.o
00:04:46.540    CC lib/nvme/nvme_discovery.o
00:04:46.799    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:04:46.799    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:04:46.799    CC lib/nvme/nvme_tcp.o
00:04:46.799    CC lib/nvme/nvme_opal.o
00:04:47.058    LIB libspdk_thread.a
00:04:47.058    CC lib/nvme/nvme_io_msg.o
00:04:47.058    CC lib/nvme/nvme_poll_group.o
00:04:47.058    CC lib/accel/accel.o
00:04:47.058    CC lib/accel/accel_rpc.o
00:04:47.058    CC lib/accel/accel_sw.o
00:04:47.317    CC lib/nvme/nvme_zns.o
00:04:47.317    CC lib/nvme/nvme_stubs.o
00:04:47.317    CC lib/nvme/nvme_auth.o
00:04:47.576    CC lib/nvme/nvme_cuse.o
00:04:47.576    CC lib/nvme/nvme_rdma.o
00:04:47.576    CC lib/blob/blobstore.o
00:04:47.576    CC lib/blob/request.o
00:04:47.835    CC lib/blob/zeroes.o
00:04:47.835    CC lib/blob/blob_bs_dev.o
00:04:48.094    CC lib/init/json_config.o
00:04:48.094    CC lib/virtio/virtio.o
00:04:48.094    CC lib/fsdev/fsdev.o
00:04:48.094    CC lib/fsdev/fsdev_io.o
00:04:48.353    CC lib/virtio/virtio_vhost_user.o
00:04:48.353    CC lib/init/subsystem.o
00:04:48.353    LIB libspdk_accel.a
00:04:48.353    CC lib/init/subsystem_rpc.o
00:04:48.353    CC lib/init/rpc.o
00:04:48.353    CC lib/virtio/virtio_vfio_user.o
00:04:48.353    CC lib/virtio/virtio_pci.o
00:04:48.612    CC lib/fsdev/fsdev_rpc.o
00:04:48.612    LIB libspdk_init.a
00:04:48.612    CC lib/bdev/bdev.o
00:04:48.612    CC lib/bdev/bdev_rpc.o
00:04:48.612    CC lib/bdev/bdev_zone.o
00:04:48.612    CC lib/bdev/part.o
00:04:48.612    CC lib/bdev/scsi_nvme.o
00:04:48.612    CC lib/event/app.o
00:04:48.872    LIB libspdk_virtio.a
00:04:48.872    CC lib/event/reactor.o
00:04:48.872    CC lib/event/log_rpc.o
00:04:48.872    LIB libspdk_nvme.a
00:04:48.872    LIB libspdk_fsdev.a
00:04:48.872    CC lib/event/app_rpc.o
00:04:48.872    CC lib/event/scheduler_static.o
00:04:49.130    CC lib/fuse_dispatcher/fuse_dispatcher.o
00:04:49.389    LIB libspdk_event.a
00:04:49.649    LIB libspdk_fuse_dispatcher.a
00:04:51.028    LIB libspdk_blob.a
00:04:51.028    CC lib/blobfs/tree.o
00:04:51.028    CC lib/blobfs/blobfs.o
00:04:51.287    CC lib/lvol/lvol.o
00:04:51.287    LIB libspdk_bdev.a
00:04:51.545    CC lib/nbd/nbd.o
00:04:51.545    CC lib/nbd/nbd_rpc.o
00:04:51.545    CC lib/scsi/dev.o
00:04:51.545    CC lib/scsi/lun.o
00:04:51.545    CC lib/scsi/scsi.o
00:04:51.546    CC lib/scsi/port.o
00:04:51.546    CC lib/ftl/ftl_core.o
00:04:51.546    CC lib/nvmf/ctrlr.o
00:04:51.804    CC lib/scsi/scsi_bdev.o
00:04:51.804    CC lib/nvmf/ctrlr_discovery.o
00:04:51.804    CC lib/nvmf/ctrlr_bdev.o
00:04:51.804    CC lib/nvmf/subsystem.o
00:04:51.804    CC lib/nvmf/nvmf.o
00:04:52.064    LIB libspdk_blobfs.a
00:04:52.064    CC lib/ftl/ftl_init.o
00:04:52.064    LIB libspdk_nbd.a
00:04:52.064    CC lib/ftl/ftl_layout.o
00:04:52.064    CC lib/ftl/ftl_debug.o
00:04:52.323    LIB libspdk_lvol.a
00:04:52.323    CC lib/nvmf/nvmf_rpc.o
00:04:52.323    CC lib/scsi/scsi_pr.o
00:04:52.323    CC lib/scsi/scsi_rpc.o
00:04:52.323    CC lib/scsi/task.o
00:04:52.323    CC lib/nvmf/transport.o
00:04:52.323    CC lib/ftl/ftl_io.o
00:04:52.582    CC lib/ftl/ftl_sb.o
00:04:52.582    CC lib/ftl/ftl_l2p.o
00:04:52.582    CC lib/nvmf/tcp.o
00:04:52.582    LIB libspdk_scsi.a
00:04:52.582    CC lib/nvmf/stubs.o
00:04:52.841    CC lib/nvmf/mdns_server.o
00:04:52.841    CC lib/nvmf/rdma.o
00:04:52.841    CC lib/ftl/ftl_l2p_flat.o
00:04:52.841    CC lib/ftl/ftl_nv_cache.o
00:04:53.100    CC lib/nvmf/auth.o
00:04:53.100    CC lib/iscsi/conn.o
00:04:53.100    CC lib/ftl/ftl_band.o
00:04:53.359    CC lib/ftl/ftl_band_ops.o
00:04:53.359    CC lib/vhost/vhost.o
00:04:53.359    CC lib/vhost/vhost_rpc.o
00:04:53.619    CC lib/vhost/vhost_scsi.o
00:04:53.619    CC lib/vhost/vhost_blk.o
00:04:53.619    CC lib/vhost/rte_vhost_user.o
00:04:53.878    CC lib/iscsi/init_grp.o
00:04:53.878    CC lib/iscsi/iscsi.o
00:04:54.138    CC lib/ftl/ftl_writer.o
00:04:54.138    CC lib/ftl/ftl_rq.o
00:04:54.138    CC lib/ftl/ftl_reloc.o
00:04:54.138    CC lib/iscsi/param.o
00:04:54.138    CC lib/iscsi/portal_grp.o
00:04:54.397    CC lib/iscsi/tgt_node.o
00:04:54.397    CC lib/iscsi/iscsi_subsystem.o
00:04:54.397    CC lib/ftl/ftl_l2p_cache.o
00:04:54.397    CC lib/ftl/ftl_p2l.o
00:04:54.660    CC lib/ftl/ftl_p2l_log.o
00:04:54.660    CC lib/iscsi/iscsi_rpc.o
00:04:54.660    CC lib/ftl/mngt/ftl_mngt.o
00:04:54.660    LIB libspdk_vhost.a
00:04:54.660    CC lib/ftl/mngt/ftl_mngt_bdev.o
00:04:54.937    CC lib/iscsi/task.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_startup.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_md.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_misc.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_ioch.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_l2p.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_band.o
00:04:54.937    CC lib/ftl/mngt/ftl_mngt_self_test.o
00:04:55.207    CC lib/ftl/mngt/ftl_mngt_p2l.o
00:04:55.207    CC lib/ftl/mngt/ftl_mngt_recovery.o
00:04:55.207    CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:04:55.207    CC lib/ftl/utils/ftl_conf.o
00:04:55.207    LIB libspdk_nvmf.a
00:04:55.207    CC lib/ftl/utils/ftl_md.o
00:04:55.207    CC lib/ftl/utils/ftl_mempool.o
00:04:55.207    CC lib/ftl/utils/ftl_bitmap.o
00:04:55.207    CC lib/ftl/utils/ftl_property.o
00:04:55.465    CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:04:55.465    CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:04:55.465    CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:04:55.465    CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:04:55.465    CC lib/ftl/upgrade/ftl_band_upgrade.o
00:04:55.465    CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:04:55.465    LIB libspdk_iscsi.a
00:04:55.465    CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:04:55.465    CC lib/ftl/upgrade/ftl_sb_v3.o
00:04:55.465    CC lib/ftl/upgrade/ftl_sb_v5.o
00:04:55.722    CC lib/ftl/nvc/ftl_nvc_dev.o
00:04:55.722    CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:04:55.722    CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:04:55.722    CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:04:55.722    CC lib/ftl/base/ftl_base_dev.o
00:04:55.722    CC lib/ftl/base/ftl_base_bdev.o
00:04:55.722    CC lib/ftl/ftl_trace.o
00:04:55.981    LIB libspdk_ftl.a
00:04:56.549    CC module/env_dpdk/env_dpdk_rpc.o
00:04:56.549    CC module/sock/posix/posix.o
00:04:56.550    CC module/accel/dsa/accel_dsa.o
00:04:56.550    CC module/blob/bdev/blob_bdev.o
00:04:56.550    CC module/accel/error/accel_error.o
00:04:56.550    CC module/accel/ioat/accel_ioat.o
00:04:56.550    CC module/keyring/file/keyring.o
00:04:56.550    CC module/scheduler/dynamic/scheduler_dynamic.o
00:04:56.550    CC module/fsdev/aio/fsdev_aio.o
00:04:56.550    CC module/accel/iaa/accel_iaa.o
00:04:56.550    LIB libspdk_env_dpdk_rpc.a
00:04:56.550    CC module/accel/ioat/accel_ioat_rpc.o
00:04:56.550    CC module/accel/iaa/accel_iaa_rpc.o
00:04:56.550    LIB libspdk_scheduler_dynamic.a
00:04:56.550    CC module/keyring/file/keyring_rpc.o
00:04:56.550    CC module/accel/error/accel_error_rpc.o
00:04:56.808    LIB libspdk_accel_ioat.a
00:04:56.808    LIB libspdk_blob_bdev.a
00:04:56.808    CC module/accel/dsa/accel_dsa_rpc.o
00:04:56.808    CC module/fsdev/aio/fsdev_aio_rpc.o
00:04:56.808    LIB libspdk_accel_iaa.a
00:04:56.808    CC module/fsdev/aio/linux_aio_mgr.o
00:04:56.808    CC module/scheduler/dpdk_governor/dpdk_governor.o
00:04:56.808    LIB libspdk_keyring_file.a
00:04:56.808    CC module/scheduler/gscheduler/gscheduler.o
00:04:56.808    LIB libspdk_accel_error.a
00:04:56.808    LIB libspdk_accel_dsa.a
00:04:57.067    LIB libspdk_scheduler_dpdk_governor.a
00:04:57.067    CC module/keyring/linux/keyring.o
00:04:57.067    CC module/bdev/delay/vbdev_delay.o
00:04:57.067    LIB libspdk_scheduler_gscheduler.a
00:04:57.067    CC module/bdev/error/vbdev_error.o
00:04:57.067    CC module/bdev/error/vbdev_error_rpc.o
00:04:57.067    CC module/bdev/malloc/bdev_malloc.o
00:04:57.067    CC module/bdev/gpt/gpt.o
00:04:57.067    CC module/keyring/linux/keyring_rpc.o
00:04:57.067    CC module/bdev/lvol/vbdev_lvol.o
00:04:57.326    LIB libspdk_fsdev_aio.a
00:04:57.326    CC module/bdev/gpt/vbdev_gpt.o
00:04:57.326    CC module/blobfs/bdev/blobfs_bdev.o
00:04:57.326    LIB libspdk_keyring_linux.a
00:04:57.326    LIB libspdk_sock_posix.a
00:04:57.326    LIB libspdk_bdev_error.a
00:04:57.326    CC module/bdev/delay/vbdev_delay_rpc.o
00:04:57.326    CC module/bdev/null/bdev_null.o
00:04:57.326    CC module/bdev/passthru/vbdev_passthru.o
00:04:57.326    CC module/bdev/raid/bdev_raid.o
00:04:57.586    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:04:57.586    CC module/bdev/nvme/bdev_nvme.o
00:04:57.586    CC module/bdev/malloc/bdev_malloc_rpc.o
00:04:57.586    LIB libspdk_bdev_gpt.a
00:04:57.586    CC module/bdev/split/vbdev_split.o
00:04:57.586    LIB libspdk_bdev_delay.a
00:04:57.586    CC module/bdev/split/vbdev_split_rpc.o
00:04:57.586    LIB libspdk_blobfs_bdev.a
00:04:57.586    CC module/bdev/nvme/bdev_nvme_rpc.o
00:04:57.586    CC module/bdev/nvme/nvme_rpc.o
00:04:57.586    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:04:57.586    CC module/bdev/null/bdev_null_rpc.o
00:04:57.845    LIB libspdk_bdev_malloc.a
00:04:57.845    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:04:57.845    CC module/bdev/nvme/bdev_mdns_client.o
00:04:57.845    LIB libspdk_bdev_split.a
00:04:57.845    CC module/bdev/raid/bdev_raid_rpc.o
00:04:57.845    LIB libspdk_bdev_null.a
00:04:57.845    CC module/bdev/zone_block/vbdev_zone_block.o
00:04:57.845    CC module/bdev/nvme/vbdev_opal.o
00:04:57.845    LIB libspdk_bdev_passthru.a
00:04:58.105    CC module/bdev/ftl/bdev_ftl.o
00:04:58.105    CC module/bdev/aio/bdev_aio.o
00:04:58.105    LIB libspdk_bdev_lvol.a
00:04:58.105    CC module/bdev/ftl/bdev_ftl_rpc.o
00:04:58.105    CC module/bdev/aio/bdev_aio_rpc.o
00:04:58.105    CC module/bdev/iscsi/bdev_iscsi.o
00:04:58.364    CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:04:58.364    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:04:58.364    CC module/bdev/raid/bdev_raid_sb.o
00:04:58.364    CC module/bdev/nvme/vbdev_opal_rpc.o
00:04:58.364    LIB libspdk_bdev_ftl.a
00:04:58.364    CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:04:58.364    CC module/bdev/raid/raid0.o
00:04:58.364    CC module/bdev/raid/raid1.o
00:04:58.364    LIB libspdk_bdev_aio.a
00:04:58.364    LIB libspdk_bdev_zone_block.a
00:04:58.623    CC module/bdev/raid/concat.o
00:04:58.623    LIB libspdk_bdev_iscsi.a
00:04:58.623    CC module/bdev/virtio/bdev_virtio_scsi.o
00:04:58.623    CC module/bdev/virtio/bdev_virtio_rpc.o
00:04:58.623    CC module/bdev/virtio/bdev_virtio_blk.o
00:04:58.881    LIB libspdk_bdev_raid.a
00:04:59.140    LIB libspdk_bdev_virtio.a
00:05:00.076    LIB libspdk_bdev_nvme.a
00:05:00.642    CC module/event/subsystems/iobuf/iobuf.o
00:05:00.643    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:05:00.643    CC module/event/subsystems/keyring/keyring.o
00:05:00.643    CC module/event/subsystems/vmd/vmd.o
00:05:00.643    CC module/event/subsystems/vmd/vmd_rpc.o
00:05:00.643    CC module/event/subsystems/scheduler/scheduler.o
00:05:00.643    CC module/event/subsystems/sock/sock.o
00:05:00.643    CC module/event/subsystems/vhost_blk/vhost_blk.o
00:05:00.643    CC module/event/subsystems/fsdev/fsdev.o
00:05:00.643    LIB libspdk_event_keyring.a
00:05:00.643    LIB libspdk_event_vmd.a
00:05:00.643    LIB libspdk_event_fsdev.a
00:05:00.643    LIB libspdk_event_scheduler.a
00:05:00.643    LIB libspdk_event_vhost_blk.a
00:05:00.643    LIB libspdk_event_iobuf.a
00:05:00.643    LIB libspdk_event_sock.a
00:05:00.901    CC module/event/subsystems/accel/accel.o
00:05:01.161    LIB libspdk_event_accel.a
00:05:01.161    CC module/event/subsystems/bdev/bdev.o
00:05:01.419    LIB libspdk_event_bdev.a
00:05:01.678    CC module/event/subsystems/nbd/nbd.o
00:05:01.678    CC module/event/subsystems/scsi/scsi.o
00:05:01.678    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:05:01.678    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:05:01.937    LIB libspdk_event_nbd.a
00:05:01.937    LIB libspdk_event_scsi.a
00:05:01.937    LIB libspdk_event_nvmf.a
00:05:02.195    CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:05:02.195    CC module/event/subsystems/iscsi/iscsi.o
00:05:02.195    LIB libspdk_event_vhost_scsi.a
00:05:02.195    LIB libspdk_event_iscsi.a
00:05:02.454    CC app/trace_record/trace_record.o
00:05:02.454    CXX app/trace/trace.o
00:05:02.454    CC examples/interrupt_tgt/interrupt_tgt.o
00:05:02.454    CC app/iscsi_tgt/iscsi_tgt.o
00:05:02.454    CC app/nvmf_tgt/nvmf_main.o
00:05:02.713    CC examples/util/zipf/zipf.o
00:05:02.713    CC examples/ioat/perf/perf.o
00:05:02.713    CC test/thread/poller_perf/poller_perf.o
00:05:02.713    CC app/spdk_tgt/spdk_tgt.o
00:05:02.713    CC test/dma/test_dma/test_dma.o
00:05:02.713    LINK poller_perf
00:05:02.713    LINK interrupt_tgt
00:05:02.713    LINK nvmf_tgt
00:05:02.713    LINK zipf
00:05:02.998    LINK iscsi_tgt
00:05:02.998    LINK spdk_trace_record
00:05:02.998    LINK spdk_tgt
00:05:02.998    LINK ioat_perf
00:05:02.998    LINK spdk_trace
00:05:03.257    LINK test_dma
00:05:03.515    CC app/spdk_lspci/spdk_lspci.o
00:05:03.515    CC app/spdk_nvme_perf/perf.o
00:05:03.515    LINK spdk_lspci
00:05:03.515    CC examples/ioat/verify/verify.o
00:05:03.774    CC examples/thread/thread/thread_ex.o
00:05:03.774    CC test/thread/lock/spdk_lock.o
00:05:03.774    LINK verify
00:05:04.032    LINK thread
00:05:04.291    LINK spdk_nvme_perf
00:05:04.550    CC app/spdk_nvme_identify/identify.o
00:05:04.550    CC examples/sock/hello_world/hello_sock.o
00:05:04.809    CC examples/vmd/lsvmd/lsvmd.o
00:05:04.809    LINK hello_sock
00:05:05.068    LINK lsvmd
00:05:05.327    LINK spdk_nvme_identify
00:05:05.586    LINK spdk_lock
00:05:05.586    CC app/spdk_nvme_discover/discovery_aer.o
00:05:05.844    CC app/spdk_top/spdk_top.o
00:05:05.845    LINK spdk_nvme_discover
00:05:06.103    CC examples/vmd/led/led.o
00:05:06.361    LINK led
00:05:06.620    CC app/vhost/vhost.o
00:05:06.620    LINK vhost
00:05:06.620    CC app/spdk_dd/spdk_dd.o
00:05:06.879    LINK spdk_top
00:05:06.879    TEST_HEADER include/spdk/accel.h
00:05:06.879    TEST_HEADER include/spdk/accel_module.h
00:05:06.879    TEST_HEADER include/spdk/assert.h
00:05:06.879    TEST_HEADER include/spdk/barrier.h
00:05:06.879    TEST_HEADER include/spdk/base64.h
00:05:06.879    TEST_HEADER include/spdk/bdev.h
00:05:06.879    TEST_HEADER include/spdk/bdev_module.h
00:05:06.879    TEST_HEADER include/spdk/bdev_zone.h
00:05:06.879    TEST_HEADER include/spdk/bit_array.h
00:05:06.879    CC app/fio/nvme/fio_plugin.o
00:05:06.879    TEST_HEADER include/spdk/bit_pool.h
00:05:06.879    TEST_HEADER include/spdk/blob.h
00:05:06.879    TEST_HEADER include/spdk/blob_bdev.h
00:05:06.879    TEST_HEADER include/spdk/blobfs.h
00:05:06.879    TEST_HEADER include/spdk/blobfs_bdev.h
00:05:06.879    TEST_HEADER include/spdk/conf.h
00:05:06.879    TEST_HEADER include/spdk/config.h
00:05:06.879    TEST_HEADER include/spdk/cpuset.h
00:05:06.879    TEST_HEADER include/spdk/crc16.h
00:05:06.879    TEST_HEADER include/spdk/crc32.h
00:05:06.879    TEST_HEADER include/spdk/crc64.h
00:05:06.879    TEST_HEADER include/spdk/dif.h
00:05:06.879    TEST_HEADER include/spdk/dma.h
00:05:06.879    TEST_HEADER include/spdk/endian.h
00:05:06.879    TEST_HEADER include/spdk/env.h
00:05:06.879    TEST_HEADER include/spdk/env_dpdk.h
00:05:06.879    TEST_HEADER include/spdk/event.h
00:05:06.879    TEST_HEADER include/spdk/fd.h
00:05:06.879    TEST_HEADER include/spdk/fd_group.h
00:05:06.879    TEST_HEADER include/spdk/file.h
00:05:06.879    TEST_HEADER include/spdk/fsdev.h
00:05:06.879    TEST_HEADER include/spdk/fsdev_module.h
00:05:06.879    TEST_HEADER include/spdk/ftl.h
00:05:06.879    TEST_HEADER include/spdk/fuse_dispatcher.h
00:05:06.879    TEST_HEADER include/spdk/gpt_spec.h
00:05:06.879    TEST_HEADER include/spdk/hexlify.h
00:05:06.879    TEST_HEADER include/spdk/histogram_data.h
00:05:06.879    CC test/app/bdev_svc/bdev_svc.o
00:05:06.879    TEST_HEADER include/spdk/idxd.h
00:05:06.879    TEST_HEADER include/spdk/idxd_spec.h
00:05:06.879    TEST_HEADER include/spdk/init.h
00:05:06.879    TEST_HEADER include/spdk/ioat.h
00:05:06.879    TEST_HEADER include/spdk/ioat_spec.h
00:05:06.879    TEST_HEADER include/spdk/iscsi_spec.h
00:05:06.879    TEST_HEADER include/spdk/json.h
00:05:06.879    TEST_HEADER include/spdk/jsonrpc.h
00:05:06.879    TEST_HEADER include/spdk/keyring.h
00:05:06.879    TEST_HEADER include/spdk/keyring_module.h
00:05:06.879    TEST_HEADER include/spdk/likely.h
00:05:06.879    TEST_HEADER include/spdk/log.h
00:05:06.879    TEST_HEADER include/spdk/lvol.h
00:05:06.879    TEST_HEADER include/spdk/md5.h
00:05:06.879    TEST_HEADER include/spdk/memory.h
00:05:06.879    TEST_HEADER include/spdk/mmio.h
00:05:06.879    TEST_HEADER include/spdk/nbd.h
00:05:06.879    TEST_HEADER include/spdk/net.h
00:05:06.879    TEST_HEADER include/spdk/notify.h
00:05:06.879    TEST_HEADER include/spdk/nvme.h
00:05:06.879    TEST_HEADER include/spdk/nvme_intel.h
00:05:06.879    TEST_HEADER include/spdk/nvme_ocssd.h
00:05:06.879    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:05:06.879    TEST_HEADER include/spdk/nvme_spec.h
00:05:06.879    TEST_HEADER include/spdk/nvme_zns.h
00:05:06.879    TEST_HEADER include/spdk/nvmf.h
00:05:06.879    TEST_HEADER include/spdk/nvmf_cmd.h
00:05:06.879    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:05:06.879    TEST_HEADER include/spdk/nvmf_spec.h
00:05:06.879    TEST_HEADER include/spdk/nvmf_transport.h
00:05:06.879    TEST_HEADER include/spdk/opal.h
00:05:06.879    TEST_HEADER include/spdk/opal_spec.h
00:05:06.879    TEST_HEADER include/spdk/pci_ids.h
00:05:06.879    TEST_HEADER include/spdk/pipe.h
00:05:06.879    TEST_HEADER include/spdk/queue.h
00:05:06.879    TEST_HEADER include/spdk/reduce.h
00:05:06.879    TEST_HEADER include/spdk/rpc.h
00:05:06.879    TEST_HEADER include/spdk/scheduler.h
00:05:06.879    TEST_HEADER include/spdk/scsi.h
00:05:06.879    TEST_HEADER include/spdk/scsi_spec.h
00:05:06.879    TEST_HEADER include/spdk/sock.h
00:05:06.879    TEST_HEADER include/spdk/stdinc.h
00:05:06.879    TEST_HEADER include/spdk/string.h
00:05:06.879    TEST_HEADER include/spdk/thread.h
00:05:06.879    TEST_HEADER include/spdk/trace.h
00:05:06.879    TEST_HEADER include/spdk/trace_parser.h
00:05:06.879    TEST_HEADER include/spdk/tree.h
00:05:06.879    TEST_HEADER include/spdk/ublk.h
00:05:06.879    TEST_HEADER include/spdk/util.h
00:05:06.879    TEST_HEADER include/spdk/uuid.h
00:05:06.879    TEST_HEADER include/spdk/version.h
00:05:06.879    TEST_HEADER include/spdk/vfio_user_pci.h
00:05:06.879    TEST_HEADER include/spdk/vfio_user_spec.h
00:05:06.879    TEST_HEADER include/spdk/vhost.h
00:05:06.879    TEST_HEADER include/spdk/vmd.h
00:05:06.879    TEST_HEADER include/spdk/xor.h
00:05:06.879    TEST_HEADER include/spdk/zipf.h
00:05:06.879    CXX test/cpp_headers/accel.o
00:05:07.138    CXX test/cpp_headers/accel_module.o
00:05:07.138    LINK bdev_svc
00:05:07.138    CXX test/cpp_headers/assert.o
00:05:07.138    LINK spdk_dd
00:05:07.138    CC test/app/histogram_perf/histogram_perf.o
00:05:07.398    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:05:07.398    CXX test/cpp_headers/barrier.o
00:05:07.398    CC test/app/jsoncat/jsoncat.o
00:05:07.398    LINK histogram_perf
00:05:07.398    CC examples/idxd/perf/perf.o
00:05:07.398    LINK spdk_nvme
00:05:07.398    CXX test/cpp_headers/base64.o
00:05:07.398    CC test/app/stub/stub.o
00:05:07.398    LINK jsoncat
00:05:07.656    CXX test/cpp_headers/bdev.o
00:05:07.656    LINK stub
00:05:07.656    LINK nvme_fuzz
00:05:07.915    LINK idxd_perf
00:05:07.915    CXX test/cpp_headers/bdev_module.o
00:05:07.915    CXX test/cpp_headers/bdev_zone.o
00:05:08.174    CXX test/cpp_headers/bit_array.o
00:05:08.174    CXX test/cpp_headers/bit_pool.o
00:05:08.432    CC test/event/event_perf/event_perf.o
00:05:08.432    CC test/env/mem_callbacks/mem_callbacks.o
00:05:08.432    CXX test/cpp_headers/blob.o
00:05:08.432    LINK event_perf
00:05:08.691    CC examples/fsdev/hello_world/hello_fsdev.o
00:05:08.691    CXX test/cpp_headers/blob_bdev.o
00:05:08.691    CC app/fio/bdev/fio_plugin.o
00:05:08.950    LINK mem_callbacks
00:05:08.950    CC test/env/vtophys/vtophys.o
00:05:08.950    CXX test/cpp_headers/blobfs.o
00:05:08.950    LINK hello_fsdev
00:05:08.950    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:05:08.950    CXX test/cpp_headers/blobfs_bdev.o
00:05:08.950    LINK vtophys
00:05:09.209    CXX test/cpp_headers/conf.o
00:05:09.209    CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:05:09.209    CC test/event/reactor/reactor.o
00:05:09.468    LINK spdk_bdev
00:05:09.468    CXX test/cpp_headers/config.o
00:05:09.468    CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:05:09.468    CXX test/cpp_headers/cpuset.o
00:05:09.468    LINK reactor
00:05:09.727    CXX test/cpp_headers/crc16.o
00:05:09.727    CXX test/cpp_headers/crc32.o
00:05:09.986    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:05:09.986    LINK vhost_fuzz
00:05:09.986    CXX test/cpp_headers/crc64.o
00:05:09.986    LINK env_dpdk_post_init
00:05:10.245    CC test/nvme/aer/aer.o
00:05:10.245    CXX test/cpp_headers/dif.o
00:05:10.245    CC test/event/reactor_perf/reactor_perf.o
00:05:10.504    CXX test/cpp_headers/dma.o
00:05:10.504    CC test/event/app_repeat/app_repeat.o
00:05:10.504    LINK reactor_perf
00:05:10.504    LINK aer
00:05:10.504    CXX test/cpp_headers/endian.o
00:05:10.504    CXX test/cpp_headers/env.o
00:05:10.762    LINK app_repeat
00:05:10.763    CXX test/cpp_headers/env_dpdk.o
00:05:10.763    CXX test/cpp_headers/event.o
00:05:10.763    CC test/rpc_client/rpc_client_test.o
00:05:11.022    CXX test/cpp_headers/fd.o
00:05:11.022    LINK rpc_client_test
00:05:11.022    CC test/event/scheduler/scheduler.o
00:05:11.022    LINK iscsi_fuzz
00:05:11.022    CXX test/cpp_headers/fd_group.o
00:05:11.281    CXX test/cpp_headers/file.o
00:05:11.281    CC test/env/memory/memory_ut.o
00:05:11.281    LINK scheduler
00:05:11.281    CC test/env/pci/pci_ut.o
00:05:11.540    CXX test/cpp_headers/fsdev.o
00:05:11.540    CXX test/cpp_headers/fsdev_module.o
00:05:11.798    CXX test/cpp_headers/ftl.o
00:05:11.798    LINK pci_ut
00:05:11.798    CC test/nvme/reset/reset.o
00:05:11.798    CXX test/cpp_headers/fuse_dispatcher.o
00:05:11.798    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:05:12.057    CC examples/accel/perf/accel_perf.o
00:05:12.057    CXX test/cpp_headers/gpt_spec.o
00:05:12.057    LINK reset
00:05:12.057    LINK histogram_ut
00:05:12.057    CXX test/cpp_headers/hexlify.o
00:05:12.316    CC examples/blob/hello_world/hello_blob.o
00:05:12.316    CXX test/cpp_headers/histogram_data.o
00:05:12.316    CC examples/nvme/hello_world/hello_world.o
00:05:12.316    LINK memory_ut
00:05:12.575    CC examples/blob/cli/blobcli.o
00:05:12.575    CXX test/cpp_headers/idxd.o
00:05:12.575    LINK accel_perf
00:05:12.575    CC test/unit/lib/log/log.c/log_ut.o
00:05:12.575    LINK hello_blob
00:05:12.575    LINK hello_world
00:05:12.834    CXX test/cpp_headers/idxd_spec.o
00:05:12.834    CC test/unit/lib/rdma/common.c/common_ut.o
00:05:12.834    CXX test/cpp_headers/init.o
00:05:12.834    CXX test/cpp_headers/ioat.o
00:05:12.834    LINK log_ut
00:05:13.093    LINK blobcli
00:05:13.093    CXX test/cpp_headers/ioat_spec.o
00:05:13.351    CC test/nvme/sgl/sgl.o
00:05:13.351    CC test/nvme/e2edp/nvme_dp.o
00:05:13.351    CXX test/cpp_headers/iscsi_spec.o
00:05:13.351    CC test/unit/lib/util/base64.c/base64_ut.o
00:05:13.610    LINK sgl
00:05:13.610    CXX test/cpp_headers/json.o
00:05:13.610    LINK common_ut
00:05:13.610    LINK nvme_dp
00:05:13.610    LINK base64_ut
00:05:13.610    CXX test/cpp_headers/jsonrpc.o
00:05:13.610    CC test/unit/lib/dma/dma.c/dma_ut.o
00:05:13.869    CC examples/nvme/reconnect/reconnect.o
00:05:13.869    CXX test/cpp_headers/keyring.o
00:05:14.129    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:05:14.129    CC examples/bdev/hello_world/hello_bdev.o
00:05:14.129    CXX test/cpp_headers/keyring_module.o
00:05:14.129    LINK reconnect
00:05:14.129    CXX test/cpp_headers/likely.o
00:05:14.129    LINK hello_bdev
00:05:14.388    CXX test/cpp_headers/log.o
00:05:14.648    CXX test/cpp_headers/lvol.o
00:05:14.648    LINK dma_ut
00:05:14.648    LINK bit_array_ut
00:05:14.648    CXX test/cpp_headers/md5.o
00:05:14.648    CXX test/cpp_headers/memory.o
00:05:14.648    CXX test/cpp_headers/mmio.o
00:05:14.907    CXX test/cpp_headers/nbd.o
00:05:14.907    CC test/nvme/overhead/overhead.o
00:05:14.907    CC test/nvme/startup/startup.o
00:05:14.907    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:05:14.907    CC test/nvme/err_injection/err_injection.o
00:05:14.907    CXX test/cpp_headers/net.o
00:05:14.907    CC test/nvme/reserve/reserve.o
00:05:15.166    CXX test/cpp_headers/notify.o
00:05:15.166    LINK startup
00:05:15.166    LINK overhead
00:05:15.166    LINK err_injection
00:05:15.166    LINK cpuset_ut
00:05:15.166    LINK reserve
00:05:15.166    CXX test/cpp_headers/nvme.o
00:05:15.427    CC examples/nvme/nvme_manage/nvme_manage.o
00:05:15.427    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:05:15.427    CXX test/cpp_headers/nvme_intel.o
00:05:15.685    CXX test/cpp_headers/nvme_ocssd.o
00:05:15.685    LINK crc16_ut
00:05:15.685    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:05:15.685    CXX test/cpp_headers/nvme_ocssd_spec.o
00:05:15.685    CXX test/cpp_headers/nvme_spec.o
00:05:15.944    LINK crc32_ieee_ut
00:05:15.944    LINK nvme_manage
00:05:15.944    CXX test/cpp_headers/nvme_zns.o
00:05:15.944    CXX test/cpp_headers/nvmf.o
00:05:15.944    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:05:16.202    CXX test/cpp_headers/nvmf_cmd.o
00:05:16.202    LINK crc32c_ut
00:05:16.202    CXX test/cpp_headers/nvmf_fc_spec.o
00:05:16.202    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:05:16.460    CXX test/cpp_headers/nvmf_spec.o
00:05:16.460    CXX test/cpp_headers/nvmf_transport.o
00:05:16.460    CXX test/cpp_headers/opal.o
00:05:16.460    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:05:16.460    CC test/nvme/simple_copy/simple_copy.o
00:05:16.460    CC test/unit/lib/util/dif.c/dif_ut.o
00:05:16.460    CC examples/bdev/bdevperf/bdevperf.o
00:05:16.460    CC test/unit/lib/util/file.c/file_ut.o
00:05:16.719    CXX test/cpp_headers/opal_spec.o
00:05:16.719    LINK crc64_ut
00:05:16.719    CC test/unit/lib/util/iov.c/iov_ut.o
00:05:16.719    CXX test/cpp_headers/pci_ids.o
00:05:16.719    LINK file_ut
00:05:16.719    LINK simple_copy
00:05:16.978    CC test/unit/lib/util/math.c/math_ut.o
00:05:16.978    CC test/unit/lib/util/net.c/net_ut.o
00:05:17.237    CC examples/nvme/arbitration/arbitration.o
00:05:17.237    CXX test/cpp_headers/pipe.o
00:05:17.237    LINK net_ut
00:05:17.237    LINK math_ut
00:05:17.237    CC examples/nvme/hotplug/hotplug.o
00:05:17.237    LINK iov_ut
00:05:17.237    LINK ioat_ut
00:05:17.495    LINK bdevperf
00:05:17.495    CXX test/cpp_headers/queue.o
00:05:17.495    CXX test/cpp_headers/reduce.o
00:05:17.495    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:05:17.495    CC test/unit/lib/util/string.c/string_ut.o
00:05:17.495    LINK hotplug
00:05:17.495    CC examples/nvme/cmb_copy/cmb_copy.o
00:05:17.495    CC examples/nvme/abort/abort.o
00:05:17.495    CXX test/cpp_headers/rpc.o
00:05:17.753    LINK arbitration
00:05:17.753    LINK dif_ut
00:05:17.753    LINK cmb_copy
00:05:17.754    CXX test/cpp_headers/scheduler.o
00:05:18.011    CC test/nvme/connect_stress/connect_stress.o
00:05:18.011    LINK string_ut
00:05:18.011    CXX test/cpp_headers/scsi.o
00:05:18.011    LINK abort
00:05:18.011    CXX test/cpp_headers/scsi_spec.o
00:05:18.011    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:05:18.011    LINK connect_stress
00:05:18.270    CXX test/cpp_headers/sock.o
00:05:18.270    LINK pmr_persistence
00:05:18.270    CC test/unit/lib/util/xor.c/xor_ut.o
00:05:18.270    LINK pipe_ut
00:05:18.270    CXX test/cpp_headers/stdinc.o
00:05:18.529    CXX test/cpp_headers/string.o
00:05:18.806    CC test/nvme/boot_partition/boot_partition.o
00:05:18.806    CXX test/cpp_headers/thread.o
00:05:18.806    LINK xor_ut
00:05:18.806    CC test/nvme/compliance/nvme_compliance.o
00:05:18.806    CC test/nvme/fused_ordering/fused_ordering.o
00:05:19.096    CC test/nvme/doorbell_aers/doorbell_aers.o
00:05:19.096    LINK boot_partition
00:05:19.096    CXX test/cpp_headers/trace.o
00:05:19.096    CC test/unit/lib/util/fd_group.c/fd_group_ut.o
00:05:19.096    LINK fused_ordering
00:05:19.363    CXX test/cpp_headers/trace_parser.o
00:05:19.363    LINK doorbell_aers
00:05:19.363    LINK nvme_compliance
00:05:19.363    CXX test/cpp_headers/tree.o
00:05:19.363    CC test/nvme/fdp/fdp.o
00:05:19.363    CC test/nvme/cuse/cuse.o
00:05:19.363    CXX test/cpp_headers/ublk.o
00:05:19.621    LINK fd_group_ut
00:05:19.621    CXX test/cpp_headers/util.o
00:05:19.621    LINK fdp
00:05:19.621    CC test/accel/dif/dif.o
00:05:19.879    CXX test/cpp_headers/uuid.o
00:05:19.879    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:05:20.138    CXX test/cpp_headers/version.o
00:05:20.138    CXX test/cpp_headers/vfio_user_pci.o
00:05:20.138    CXX test/cpp_headers/vfio_user_spec.o
00:05:20.396    CXX test/cpp_headers/vhost.o
00:05:20.396    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:05:20.396    CXX test/cpp_headers/vmd.o
00:05:20.396    CXX test/cpp_headers/xor.o
00:05:20.396    CXX test/cpp_headers/zipf.o
00:05:20.396    LINK dif
00:05:20.654    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:05:20.654    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:05:20.654    CC test/blobfs/mkfs/mkfs.o
00:05:20.654    LINK cuse
00:05:20.654    CC test/lvol/esnap/esnap.o
00:05:20.912    LINK mkfs
00:05:20.913    LINK pci_event_ut
00:05:20.913    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:05:20.913    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:05:21.171    LINK json_util_ut
00:05:21.171    CC examples/nvmf/nvmf/nvmf.o
00:05:21.429    LINK idxd_user_ut
00:05:21.687    LINK nvmf
00:05:21.687    LINK json_write_ut
00:05:22.254    LINK idxd_ut
00:05:22.254    LINK json_parse_ut
00:05:22.822    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:05:23.080    LINK jsonrpc_server_ut
00:05:23.339    CC test/bdev/bdevio/bdevio.o
00:05:23.598    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:05:23.857    LINK bdevio
00:05:24.792    LINK rpc_ut
00:05:25.051    CC test/unit/lib/sock/sock.c/sock_ut.o
00:05:25.051    CC test/unit/lib/sock/posix.c/posix_ut.o
00:05:25.051    CC test/unit/lib/thread/thread.c/thread_ut.o
00:05:25.051    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:05:25.051    CC test/unit/lib/notify/notify.c/notify_ut.o
00:05:25.051    CC test/unit/lib/keyring/keyring.c/keyring_ut.o
00:05:25.617    LINK keyring_ut
00:05:25.875    LINK notify_ut
00:05:26.441    LINK iobuf_ut
00:05:26.441    LINK posix_ut
00:05:26.700    LINK esnap
00:05:26.958    LINK sock_ut
00:05:27.216    LINK thread_ut
00:05:27.474    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:05:27.474    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:05:28.409    LINK nvme_ns_ut
00:05:28.667    LINK nvme_ctrlr_ocssd_cmd_ut
00:05:28.667    LINK nvme_poll_group_ut
00:05:28.667    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:05:28.667    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:05:28.926    LINK nvme_ctrlr_cmd_ut
00:05:28.926    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:05:28.926    LINK nvme_qpair_ut
00:05:29.184    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:05:29.184    LINK nvme_ut
00:05:29.184    LINK nvme_ns_ocssd_cmd_ut
00:05:29.184    LINK nvme_quirks_ut
00:05:29.442    CC test/unit/lib/accel/accel.c/accel_ut.o
00:05:29.442    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:05:29.442    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:05:29.442    LINK nvme_ns_cmd_ut
00:05:29.442    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:05:29.700    LINK nvme_pcie_ut
00:05:29.700    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:05:29.959    LINK nvme_transport_ut
00:05:29.959    CC test/unit/lib/blob/blob.c/blob_ut.o
00:05:30.217    LINK nvme_io_msg_ut
00:05:30.217    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:05:30.217    LINK blob_bdev_ut
00:05:30.475    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:05:30.475    LINK subsystem_ut
00:05:30.733    CC test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut.o
00:05:30.733    CC test/unit/lib/init/rpc.c/rpc_ut.o
00:05:30.991    LINK nvme_fabric_ut
00:05:30.991    LINK nvme_ctrlr_ut
00:05:30.991    LINK nvme_pcie_common_ut
00:05:31.249    CC test/unit/lib/fsdev/fsdev.c/fsdev_ut.o
00:05:31.249    LINK nvme_opal_ut
00:05:31.249    LINK rpc_ut
00:05:31.816    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:05:31.816    CC test/unit/lib/event/app.c/app_ut.o
00:05:31.816    LINK nvme_tcp_ut
00:05:32.384    LINK accel_ut
00:05:32.384    LINK fsdev_ut
00:05:32.642    LINK app_ut
00:05:32.642    LINK nvme_cuse_ut
00:05:32.904    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:05:32.904    CC test/unit/lib/bdev/part.c/part_ut.o
00:05:32.904    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:05:32.904    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:05:32.904    LINK reactor_ut
00:05:32.904    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:05:32.904    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:05:33.163    LINK nvme_rdma_ut
00:05:33.163    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:05:33.163    LINK scsi_nvme_ut
00:05:33.422    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:05:33.422    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:05:33.422    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:05:33.680    LINK gpt_ut
00:05:33.680    LINK bdev_zone_ut
00:05:33.941    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:05:33.941    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:05:34.509    LINK vbdev_zone_block_ut
00:05:34.509    LINK bdev_raid_sb_ut
00:05:34.768    LINK vbdev_lvol_ut
00:05:34.768    LINK concat_ut
00:05:34.768    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:05:35.027    CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o
00:05:35.595    LINK bdev_raid_ut
00:05:35.854    LINK raid1_ut
00:05:35.854    LINK raid0_ut
00:05:36.789    LINK part_ut
00:05:37.048    LINK bdev_ut
00:05:37.983    LINK blob_ut
00:05:38.242    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:05:38.242    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:05:38.242    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:05:38.242    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:05:38.242    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:05:38.500    LINK blobfs_bdev_ut
00:05:38.500    LINK tree_ut
00:05:38.500    LINK bdev_ut
00:05:39.436    LINK bdev_nvme_ut
00:05:39.694    LINK blobfs_sync_ut
00:05:39.694    LINK blobfs_async_ut
00:05:39.953    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:05:39.953    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:05:39.953    CC test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut.o
00:05:39.953    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:05:39.953    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:05:39.953    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:05:39.953    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:05:39.953    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:05:40.211    CC test/unit/lib/ftl/ftl_band.c/ftl_band_ut.o
00:05:40.469    LINK dev_ut
00:05:40.469    LINK scsi_ut
00:05:40.469    LINK ftl_l2p_ut
00:05:40.469    LINK scsi_pr_ut
00:05:40.469    LINK lvol_ut
00:05:40.728    CC test/unit/lib/ftl/ftl_io.c/ftl_io_ut.o
00:05:40.728    CC test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut.o
00:05:40.728    CC test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut.o
00:05:40.728    CC test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut.o
00:05:40.986    LINK lun_ut
00:05:40.986    CC test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut.o
00:05:40.986    LINK ftl_bitmap_ut
00:05:41.244    CC test/unit/lib/ftl/ftl_sb/ftl_sb_ut.o
00:05:41.244    CC test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut.o
00:05:41.244    LINK scsi_bdev_ut
00:05:41.503    LINK ftl_mempool_ut
00:05:41.503    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:05:41.761    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:05:41.761    LINK ftl_io_ut
00:05:41.761    LINK ftl_mngt_ut
00:05:42.020    LINK ftl_band_ut
00:05:42.020    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:05:42.020    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:05:42.020    LINK ftl_p2l_ut
00:05:42.287    CC test/unit/lib/nvmf/auth.c/auth_ut.o
00:05:42.548    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:05:42.807    LINK ftl_layout_upgrade_ut
00:05:42.807    LINK ftl_sb_ut
00:05:43.066    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:05:43.324    CC test/unit/lib/vhost/vhost.c/vhost_ut.o
00:05:43.324    LINK ctrlr_bdev_ut
00:05:43.583    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:05:43.583    LINK ctrlr_ut
00:05:43.583    LINK init_grp_ut
00:05:43.842    LINK nvmf_ut
00:05:43.842    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:05:43.842    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:05:43.842    LINK ctrlr_discovery_ut
00:05:44.101    CC test/unit/lib/iscsi/param.c/param_ut.o
00:05:44.101    LINK subsystem_ut
00:05:44.101    LINK auth_ut
00:05:44.101    LINK conn_ut
00:05:44.360    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:05:44.619    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:05:44.619    LINK tcp_ut
00:05:44.619    LINK param_ut
00:05:45.555    LINK portal_grp_ut
00:05:45.813    LINK tgt_node_ut
00:05:46.072    LINK vhost_ut
00:05:46.344    LINK iscsi_ut
00:05:47.293    LINK rdma_ut
00:05:47.293    LINK transport_ut
00:05:47.551  ************************************
00:05:47.551  END TEST unittest_build
00:05:47.551  ************************************
00:05:47.551  
00:05:47.551  real	2m1.356s
00:05:47.551  user	10m11.789s
00:05:47.551  sys	1m43.099s
00:05:47.551   04:56:01 unittest_build -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:05:47.551   04:56:01 unittest_build -- common/autotest_common.sh@10 -- $ set +x
00:05:47.551   04:56:01  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:05:47.551   04:56:01  -- pm/common@29 -- $ signal_monitor_resources TERM
00:05:47.551   04:56:01  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:05:47.551   04:56:01  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:47.551   04:56:01  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]]
00:05:47.551   04:56:01  -- pm/common@44 -- $ pid=2337
00:05:47.551   04:56:01  -- pm/common@50 -- $ kill -TERM 2337
00:05:47.551   04:56:01  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:05:47.551   04:56:01  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:05:47.551   04:56:01  -- pm/common@44 -- $ pid=2339
00:05:47.551   04:56:01  -- pm/common@50 -- $ kill -TERM 2339
00:05:47.551   04:56:01  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:05:47.551   04:56:01  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:05:47.810    04:56:01  -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:47.810     04:56:01  -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:47.810     04:56:01  -- common/autotest_common.sh@1693 -- # lcov --version
00:05:47.810    04:56:01  -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:47.810    04:56:01  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:47.810    04:56:01  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:47.810    04:56:01  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:47.810    04:56:01  -- scripts/common.sh@336 -- # IFS=.-:
00:05:47.810    04:56:01  -- scripts/common.sh@336 -- # read -ra ver1
00:05:47.810    04:56:01  -- scripts/common.sh@337 -- # IFS=.-:
00:05:47.810    04:56:01  -- scripts/common.sh@337 -- # read -ra ver2
00:05:47.810    04:56:01  -- scripts/common.sh@338 -- # local 'op=<'
00:05:47.810    04:56:01  -- scripts/common.sh@340 -- # ver1_l=2
00:05:47.810    04:56:01  -- scripts/common.sh@341 -- # ver2_l=1
00:05:47.810    04:56:01  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:47.810    04:56:01  -- scripts/common.sh@344 -- # case "$op" in
00:05:47.810    04:56:01  -- scripts/common.sh@345 -- # : 1
00:05:47.810    04:56:01  -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:47.810    04:56:01  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:47.810     04:56:01  -- scripts/common.sh@365 -- # decimal 1
00:05:47.810     04:56:01  -- scripts/common.sh@353 -- # local d=1
00:05:47.810     04:56:01  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:47.810     04:56:01  -- scripts/common.sh@355 -- # echo 1
00:05:47.810    04:56:01  -- scripts/common.sh@365 -- # ver1[v]=1
00:05:47.810     04:56:01  -- scripts/common.sh@366 -- # decimal 2
00:05:47.810     04:56:01  -- scripts/common.sh@353 -- # local d=2
00:05:47.810     04:56:01  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:47.810     04:56:01  -- scripts/common.sh@355 -- # echo 2
00:05:47.810    04:56:01  -- scripts/common.sh@366 -- # ver2[v]=2
00:05:47.810    04:56:01  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:47.810    04:56:01  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:47.810    04:56:01  -- scripts/common.sh@368 -- # return 0
00:05:47.810    04:56:01  -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:47.810    04:56:01  -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:47.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:47.810  		--rc genhtml_branch_coverage=1
00:05:47.810  		--rc genhtml_function_coverage=1
00:05:47.810  		--rc genhtml_legend=1
00:05:47.810  		--rc geninfo_all_blocks=1
00:05:47.810  		--rc geninfo_unexecuted_blocks=1
00:05:47.810  		
00:05:47.810  		'
00:05:47.810    04:56:01  -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:47.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:47.810  		--rc genhtml_branch_coverage=1
00:05:47.810  		--rc genhtml_function_coverage=1
00:05:47.810  		--rc genhtml_legend=1
00:05:47.810  		--rc geninfo_all_blocks=1
00:05:47.810  		--rc geninfo_unexecuted_blocks=1
00:05:47.810  		
00:05:47.810  		'
00:05:47.810    04:56:01  -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:05:47.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:47.810  		--rc genhtml_branch_coverage=1
00:05:47.810  		--rc genhtml_function_coverage=1
00:05:47.810  		--rc genhtml_legend=1
00:05:47.810  		--rc geninfo_all_blocks=1
00:05:47.810  		--rc geninfo_unexecuted_blocks=1
00:05:47.810  		
00:05:47.810  		'
00:05:47.810    04:56:01  -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:05:47.810  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:47.810  		--rc genhtml_branch_coverage=1
00:05:47.810  		--rc genhtml_function_coverage=1
00:05:47.810  		--rc genhtml_legend=1
00:05:47.810  		--rc geninfo_all_blocks=1
00:05:47.810  		--rc geninfo_unexecuted_blocks=1
00:05:47.810  		
00:05:47.810  		'
00:05:47.810   04:56:01  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:05:47.810     04:56:01  -- nvmf/common.sh@7 -- # uname -s
00:05:47.810    04:56:01  -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:47.810    04:56:01  -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:47.810    04:56:01  -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:47.811    04:56:01  -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:47.811    04:56:01  -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:47.811    04:56:01  -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:47.811    04:56:01  -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:47.811    04:56:01  -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:47.811    04:56:01  -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:47.811     04:56:01  -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:47.811    04:56:01  -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:82610a29-e1bf-4bf9-8775-8a6832215bdd
00:05:47.811    04:56:01  -- nvmf/common.sh@18 -- # NVME_HOSTID=82610a29-e1bf-4bf9-8775-8a6832215bdd
00:05:47.811    04:56:01  -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:47.811    04:56:01  -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:47.811    04:56:01  -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:47.811    04:56:01  -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:47.811    04:56:01  -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:05:47.811     04:56:01  -- scripts/common.sh@15 -- # shopt -s extglob
00:05:47.811     04:56:01  -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:47.811     04:56:01  -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:47.811     04:56:01  -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:47.811      04:56:01  -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:47.811      04:56:01  -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:47.811      04:56:01  -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:47.811      04:56:01  -- paths/export.sh@5 -- # export PATH
00:05:47.811      04:56:01  -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:05:47.811    04:56:01  -- nvmf/common.sh@51 -- # : 0
00:05:47.811    04:56:01  -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:05:47.811    04:56:01  -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:05:47.811    04:56:01  -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:47.811    04:56:01  -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:47.811    04:56:01  -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:47.811    04:56:01  -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:05:47.811  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:05:47.811    04:56:01  -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:05:47.811    04:56:01  -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:05:47.811    04:56:01  -- nvmf/common.sh@55 -- # have_pci_nics=0
00:05:47.811   04:56:01  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:05:47.811    04:56:01  -- spdk/autotest.sh@32 -- # uname -s
00:05:47.811   04:56:01  -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:05:47.811   04:56:01  -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/share/apport/apport -p%p -s%s -c%c -d%d -P%P -u%u -g%g -- %E'
00:05:47.811   04:56:01  -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:47.811   04:56:01  -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t'
00:05:47.811   04:56:01  -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps
00:05:47.811   04:56:01  -- spdk/autotest.sh@44 -- # modprobe nbd
00:05:47.811    04:56:01  -- spdk/autotest.sh@46 -- # type -P udevadm
00:05:47.811   04:56:01  -- spdk/autotest.sh@46 -- # udevadm=/usr/bin/udevadm
00:05:47.811   04:56:01  -- spdk/autotest.sh@48 -- # udevadm_pid=116705
00:05:47.811   04:56:01  -- spdk/autotest.sh@47 -- # /usr/bin/udevadm monitor --property
00:05:47.811   04:56:01  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:05:47.811   04:56:01  -- pm/common@17 -- # local monitor
00:05:47.811   04:56:01  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:47.811   04:56:01  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:05:47.811   04:56:01  -- pm/common@25 -- # sleep 1
00:05:47.811    04:56:01  -- pm/common@21 -- # date +%s
00:05:47.811    04:56:01  -- pm/common@21 -- # date +%s
00:05:47.811   04:56:01  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732078561
00:05:47.811   04:56:01  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732078561
00:05:47.811  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732078561_collect-vmstat.pm.log
00:05:47.811  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732078561_collect-cpu-load.pm.log
00:05:49.188   04:56:02  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:05:49.188   04:56:02  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:05:49.188   04:56:02  -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:49.188   04:56:02  -- common/autotest_common.sh@10 -- # set +x
00:05:49.188   04:56:02  -- spdk/autotest.sh@59 -- # create_test_list
00:05:49.188   04:56:02  -- common/autotest_common.sh@752 -- # xtrace_disable
00:05:49.188   04:56:02  -- common/autotest_common.sh@10 -- # set +x
00:05:49.188     04:56:02  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:05:49.188    04:56:02  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:05:49.188   04:56:02  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:05:49.188   04:56:02  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:05:49.188   04:56:02  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:05:49.188   04:56:02  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:05:49.188    04:56:02  -- common/autotest_common.sh@1457 -- # uname
00:05:49.188   04:56:02  -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']'
00:05:49.188   04:56:02  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:05:49.188    04:56:02  -- common/autotest_common.sh@1477 -- # uname
00:05:49.188   04:56:02  -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]]
00:05:49.188   04:56:02  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:05:49.188   04:56:02  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version
00:05:49.188  lcov: LCOV version 1.15
00:05:49.188   04:56:02  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:05:55.752  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:05:55.752  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:06:34.465   04:56:47  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:06:34.465   04:56:47  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:34.465   04:56:47  -- common/autotest_common.sh@10 -- # set +x
00:06:34.465   04:56:47  -- spdk/autotest.sh@78 -- # rm -f
00:06:34.465   04:56:47  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:34.465  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:34.465  0000:00:10.0 (1b36 0010): Already using the nvme driver
00:06:34.465   04:56:48  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:06:34.465   04:56:48  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:06:34.465   04:56:48  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:06:34.465   04:56:48  -- common/autotest_common.sh@1658 -- # local nvme bdf
00:06:34.465   04:56:48  -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:06:34.465   04:56:48  -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:06:34.465   04:56:48  -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:06:34.465   04:56:48  -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:06:34.465   04:56:48  -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:06:34.465   04:56:48  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:06:34.465   04:56:48  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:06:34.465   04:56:48  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:06:34.465   04:56:48  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1
00:06:34.465   04:56:48  -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt
00:06:34.465   04:56:48  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:06:34.465  No valid GPT data, bailing
00:06:34.465    04:56:48  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:06:34.465   04:56:48  -- scripts/common.sh@394 -- # pt=
00:06:34.465   04:56:48  -- scripts/common.sh@395 -- # return 1
00:06:34.465   04:56:48  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:06:34.465  1+0 records in
00:06:34.465  1+0 records out
00:06:34.465  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496346 s, 211 MB/s
00:06:34.465   04:56:48  -- spdk/autotest.sh@105 -- # sync
00:06:34.465   04:56:48  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:06:34.465   04:56:48  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:06:34.465    04:56:48  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:06:35.838    04:56:49  -- spdk/autotest.sh@111 -- # uname -s
00:06:35.838   04:56:49  -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]]
00:06:35.838   04:56:49  -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]]
00:06:35.838   04:56:49  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:06:36.097  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:36.097  Hugepages
00:06:36.097  node     hugesize     free /  total
00:06:36.097  node0   1048576kB        0 /      0
00:06:36.097  node0      2048kB        0 /      0
00:06:36.097  
00:06:36.097  Type                      BDF             Vendor Device NUMA    Driver           Device     Block devices
00:06:36.097  virtio                    0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:06:36.356  NVMe                      0000:00:10.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:06:36.356    04:56:50  -- spdk/autotest.sh@117 -- # uname -s
00:06:36.356   04:56:50  -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]]
00:06:36.356   04:56:50  -- spdk/autotest.sh@119 -- # nvme_namespace_revert
00:06:36.356   04:56:50  -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:36.615  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:36.874  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:37.808   04:56:51  -- common/autotest_common.sh@1517 -- # sleep 1
00:06:38.744   04:56:52  -- common/autotest_common.sh@1518 -- # bdfs=()
00:06:38.744   04:56:52  -- common/autotest_common.sh@1518 -- # local bdfs
00:06:38.744   04:56:52  -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs))
00:06:38.744    04:56:52  -- common/autotest_common.sh@1520 -- # get_nvme_bdfs
00:06:38.744    04:56:52  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:38.744    04:56:52  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:38.744    04:56:52  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:38.744     04:56:52  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:38.744     04:56:52  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:38.744    04:56:52  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:38.744    04:56:52  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:06:38.744   04:56:52  -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:06:39.003  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:39.262  Waiting for block devices as requested
00:06:39.262  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:06:39.262   04:56:53  -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}"
00:06:39.262    04:56:53  -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0
00:06:39.262     04:56:53  -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0
00:06:39.262     04:56:53  -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme
00:06:39.262    04:56:53  -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0
00:06:39.262    04:56:53  -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0 ]]
00:06:39.262     04:56:53  -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme0
00:06:39.262    04:56:53  -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0
00:06:39.262   04:56:53  -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0
00:06:39.262   04:56:53  -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]]
00:06:39.262    04:56:53  -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0
00:06:39.262    04:56:53  -- common/autotest_common.sh@1531 -- # grep oacs
00:06:39.262    04:56:53  -- common/autotest_common.sh@1531 -- # cut -d: -f2
00:06:39.262   04:56:53  -- common/autotest_common.sh@1531 -- # oacs=' 0x12a'
00:06:39.262   04:56:53  -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8
00:06:39.262   04:56:53  -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]]
00:06:39.262    04:56:53  -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0
00:06:39.262    04:56:53  -- common/autotest_common.sh@1540 -- # grep unvmcap
00:06:39.262    04:56:53  -- common/autotest_common.sh@1540 -- # cut -d: -f2
00:06:39.262   04:56:53  -- common/autotest_common.sh@1540 -- # unvmcap=' 0'
00:06:39.262   04:56:53  -- common/autotest_common.sh@1541 -- # [[  0 -eq 0 ]]
00:06:39.262   04:56:53  -- common/autotest_common.sh@1543 -- # continue
00:06:39.262   04:56:53  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:06:39.262   04:56:53  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:39.262   04:56:53  -- common/autotest_common.sh@10 -- # set +x
00:06:39.262   04:56:53  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:06:39.262   04:56:53  -- common/autotest_common.sh@726 -- # xtrace_disable
00:06:39.262   04:56:53  -- common/autotest_common.sh@10 -- # set +x
00:06:39.262   04:56:53  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:06:39.829  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:06:39.829  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:06:40.766   04:56:54  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:06:40.766   04:56:54  -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:40.766   04:56:54  -- common/autotest_common.sh@10 -- # set +x
00:06:40.766   04:56:54  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:06:40.766   04:56:54  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:06:40.766    04:56:54  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:06:40.766    04:56:54  -- common/autotest_common.sh@1563 -- # bdfs=()
00:06:40.766    04:56:54  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:06:40.766    04:56:54  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:06:40.766    04:56:54  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:06:40.766     04:56:54  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:06:40.766     04:56:54  -- common/autotest_common.sh@1498 -- # bdfs=()
00:06:40.766     04:56:54  -- common/autotest_common.sh@1498 -- # local bdfs
00:06:40.766     04:56:54  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:06:40.766      04:56:54  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:06:40.766      04:56:54  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:06:40.766     04:56:54  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:06:40.766     04:56:54  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:06:41.025    04:56:54  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:06:41.026     04:56:54  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:06:41.026    04:56:54  -- common/autotest_common.sh@1566 -- # device=0x0010
00:06:41.026    04:56:54  -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]]
00:06:41.026    04:56:54  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:06:41.026    04:56:54  -- common/autotest_common.sh@1572 -- # return 0
00:06:41.026   04:56:54  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:06:41.026   04:56:54  -- common/autotest_common.sh@1580 -- # return 0
00:06:41.026   04:56:54  -- spdk/autotest.sh@137 -- # '[' 1 -eq 1 ']'
00:06:41.026   04:56:54  -- spdk/autotest.sh@138 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:41.026   04:56:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:41.026   04:56:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:41.026   04:56:54  -- common/autotest_common.sh@10 -- # set +x
00:06:41.026  ************************************
00:06:41.026  START TEST unittest
00:06:41.026  ************************************
00:06:41.026   04:56:54 unittest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:41.026  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:41.026  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit
00:06:41.026  + testdir=/home/vagrant/spdk_repo/spdk/test/unit
00:06:41.026  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:06:41.026  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../..
00:06:41.026  + rootdir=/home/vagrant/spdk_repo/spdk
00:06:41.026  + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:06:41.026  ++ rpc_py=rpc_cmd
00:06:41.026  ++ set -e
00:06:41.026  ++ shopt -s nullglob
00:06:41.026  ++ shopt -s extglob
00:06:41.026  ++ shopt -s inherit_errexit
00:06:41.026  ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:06:41.026  ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:06:41.026  ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:06:41.026  +++ CONFIG_WPDK_DIR=
00:06:41.026  +++ CONFIG_ASAN=y
00:06:41.026  +++ CONFIG_VBDEV_COMPRESS=n
00:06:41.026  +++ CONFIG_HAVE_EXECINFO_H=y
00:06:41.026  +++ CONFIG_USDT=n
00:06:41.026  +++ CONFIG_CUSTOMOCF=n
00:06:41.026  +++ CONFIG_PREFIX=/usr/local
00:06:41.026  +++ CONFIG_RBD=n
00:06:41.026  +++ CONFIG_LIBDIR=
00:06:41.026  +++ CONFIG_IDXD=y
00:06:41.026  +++ CONFIG_NVME_CUSE=y
00:06:41.026  +++ CONFIG_SMA=n
00:06:41.026  +++ CONFIG_VTUNE=n
00:06:41.026  +++ CONFIG_TSAN=n
00:06:41.026  +++ CONFIG_RDMA_SEND_WITH_INVAL=y
00:06:41.026  +++ CONFIG_VFIO_USER_DIR=
00:06:41.026  +++ CONFIG_MAX_NUMA_NODES=1
00:06:41.026  +++ CONFIG_PGO_CAPTURE=n
00:06:41.026  +++ CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:06:41.026  +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:41.026  +++ CONFIG_LTO=n
00:06:41.026  +++ CONFIG_ISCSI_INITIATOR=y
00:06:41.026  +++ CONFIG_CET=n
00:06:41.026  +++ CONFIG_VBDEV_COMPRESS_MLX5=n
00:06:41.026  +++ CONFIG_OCF_PATH=
00:06:41.026  +++ CONFIG_RDMA_SET_TOS=y
00:06:41.026  +++ CONFIG_AIO_FSDEV=y
00:06:41.026  +++ CONFIG_HAVE_ARC4RANDOM=n
00:06:41.026  +++ CONFIG_HAVE_LIBARCHIVE=n
00:06:41.026  +++ CONFIG_UBLK=n
00:06:41.026  +++ CONFIG_ISAL_CRYPTO=y
00:06:41.026  +++ CONFIG_OPENSSL_PATH=
00:06:41.026  +++ CONFIG_OCF=n
00:06:41.026  +++ CONFIG_FUSE=n
00:06:41.026  +++ CONFIG_VTUNE_DIR=
00:06:41.026  +++ CONFIG_FUZZER_LIB=
00:06:41.026  +++ CONFIG_FUZZER=n
00:06:41.026  +++ CONFIG_FSDEV=y
00:06:41.026  +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:06:41.026  +++ CONFIG_CRYPTO=n
00:06:41.026  +++ CONFIG_PGO_USE=n
00:06:41.026  +++ CONFIG_VHOST=y
00:06:41.026  +++ CONFIG_DAOS=n
00:06:41.026  +++ CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:06:41.026  +++ CONFIG_DAOS_DIR=
00:06:41.026  +++ CONFIG_UNIT_TESTS=y
00:06:41.026  +++ CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:06:41.026  +++ CONFIG_VIRTIO=y
00:06:41.026  +++ CONFIG_DPDK_UADK=n
00:06:41.026  +++ CONFIG_COVERAGE=y
00:06:41.026  +++ CONFIG_RDMA=y
00:06:41.026  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:06:41.026  +++ CONFIG_HAVE_LZ4=n
00:06:41.026  +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:06:41.026  +++ CONFIG_URING_PATH=
00:06:41.026  +++ CONFIG_XNVME=n
00:06:41.026  +++ CONFIG_VFIO_USER=n
00:06:41.026  +++ CONFIG_ARCH=native
00:06:41.026  +++ CONFIG_HAVE_EVP_MAC=y
00:06:41.026  +++ CONFIG_URING_ZNS=n
00:06:41.026  +++ CONFIG_WERROR=y
00:06:41.026  +++ CONFIG_HAVE_LIBBSD=n
00:06:41.026  +++ CONFIG_UBSAN=y
00:06:41.026  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:06:41.026  +++ CONFIG_IPSEC_MB_DIR=
00:06:41.026  +++ CONFIG_GOLANG=n
00:06:41.026  +++ CONFIG_ISAL=y
00:06:41.026  +++ CONFIG_IDXD_KERNEL=n
00:06:41.026  +++ CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:06:41.026  +++ CONFIG_RDMA_PROV=verbs
00:06:41.026  +++ CONFIG_APPS=y
00:06:41.026  +++ CONFIG_SHARED=n
00:06:41.026  +++ CONFIG_HAVE_KEYUTILS=y
00:06:41.026  +++ CONFIG_FC_PATH=
00:06:41.026  +++ CONFIG_DPDK_PKG_CONFIG=n
00:06:41.026  +++ CONFIG_FC=n
00:06:41.026  +++ CONFIG_AVAHI=n
00:06:41.026  +++ CONFIG_FIO_PLUGIN=y
00:06:41.026  +++ CONFIG_RAID5F=n
00:06:41.026  +++ CONFIG_EXAMPLES=y
00:06:41.026  +++ CONFIG_TESTS=y
00:06:41.026  +++ CONFIG_CRYPTO_MLX5=n
00:06:41.026  +++ CONFIG_MAX_LCORES=128
00:06:41.026  +++ CONFIG_IPSEC_MB=n
00:06:41.026  +++ CONFIG_PGO_DIR=
00:06:41.026  +++ CONFIG_DEBUG=y
00:06:41.026  +++ CONFIG_DPDK_COMPRESSDEV=n
00:06:41.026  +++ CONFIG_CROSS_PREFIX=
00:06:41.026  +++ CONFIG_COPY_FILE_RANGE=y
00:06:41.026  +++ CONFIG_URING=n
00:06:41.026  ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:06:41.026  +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:06:41.026  ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:06:41.026  +++ _root=/home/vagrant/spdk_repo/spdk/test/common
00:06:41.026  +++ _root=/home/vagrant/spdk_repo/spdk
00:06:41.026  +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:06:41.026  +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:06:41.026  +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:06:41.026  +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:06:41.026  +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:06:41.026  +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:06:41.026  +++ VHOST_APP=("$_app_dir/vhost")
00:06:41.026  +++ DD_APP=("$_app_dir/spdk_dd")
00:06:41.026  +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:06:41.026  +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:06:41.026  +++ [[ #ifndef SPDK_CONFIG_H
00:06:41.026  #define SPDK_CONFIG_H
00:06:41.026  #define SPDK_CONFIG_AIO_FSDEV 1
00:06:41.026  #define SPDK_CONFIG_APPS 1
00:06:41.026  #define SPDK_CONFIG_ARCH native
00:06:41.026  #define SPDK_CONFIG_ASAN 1
00:06:41.026  #undef SPDK_CONFIG_AVAHI
00:06:41.026  #undef SPDK_CONFIG_CET
00:06:41.026  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:06:41.026  #define SPDK_CONFIG_COVERAGE 1
00:06:41.026  #define SPDK_CONFIG_CROSS_PREFIX 
00:06:41.026  #undef SPDK_CONFIG_CRYPTO
00:06:41.026  #undef SPDK_CONFIG_CRYPTO_MLX5
00:06:41.026  #undef SPDK_CONFIG_CUSTOMOCF
00:06:41.026  #undef SPDK_CONFIG_DAOS
00:06:41.026  #define SPDK_CONFIG_DAOS_DIR 
00:06:41.026  #define SPDK_CONFIG_DEBUG 1
00:06:41.026  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:06:41.026  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:06:41.026  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:06:41.026  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:06:41.026  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:06:41.026  #undef SPDK_CONFIG_DPDK_UADK
00:06:41.026  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:06:41.026  #define SPDK_CONFIG_EXAMPLES 1
00:06:41.026  #undef SPDK_CONFIG_FC
00:06:41.026  #define SPDK_CONFIG_FC_PATH 
00:06:41.026  #define SPDK_CONFIG_FIO_PLUGIN 1
00:06:41.026  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:06:41.026  #define SPDK_CONFIG_FSDEV 1
00:06:41.026  #undef SPDK_CONFIG_FUSE
00:06:41.026  #undef SPDK_CONFIG_FUZZER
00:06:41.026  #define SPDK_CONFIG_FUZZER_LIB 
00:06:41.026  #undef SPDK_CONFIG_GOLANG
00:06:41.026  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:06:41.026  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:06:41.026  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:06:41.026  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:06:41.026  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:06:41.026  #undef SPDK_CONFIG_HAVE_LIBBSD
00:06:41.026  #undef SPDK_CONFIG_HAVE_LZ4
00:06:41.026  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:06:41.026  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:06:41.026  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:06:41.026  #define SPDK_CONFIG_IDXD 1
00:06:41.026  #undef SPDK_CONFIG_IDXD_KERNEL
00:06:41.026  #undef SPDK_CONFIG_IPSEC_MB
00:06:41.026  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:06:41.026  #define SPDK_CONFIG_ISAL 1
00:06:41.026  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:06:41.026  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:06:41.026  #define SPDK_CONFIG_LIBDIR 
00:06:41.026  #undef SPDK_CONFIG_LTO
00:06:41.026  #define SPDK_CONFIG_MAX_LCORES 128
00:06:41.026  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:06:41.026  #define SPDK_CONFIG_NVME_CUSE 1
00:06:41.026  #undef SPDK_CONFIG_OCF
00:06:41.026  #define SPDK_CONFIG_OCF_PATH 
00:06:41.026  #define SPDK_CONFIG_OPENSSL_PATH 
00:06:41.026  #undef SPDK_CONFIG_PGO_CAPTURE
00:06:41.026  #define SPDK_CONFIG_PGO_DIR 
00:06:41.026  #undef SPDK_CONFIG_PGO_USE
00:06:41.026  #define SPDK_CONFIG_PREFIX /usr/local
00:06:41.026  #undef SPDK_CONFIG_RAID5F
00:06:41.026  #undef SPDK_CONFIG_RBD
00:06:41.026  #define SPDK_CONFIG_RDMA 1
00:06:41.026  #define SPDK_CONFIG_RDMA_PROV verbs
00:06:41.026  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:06:41.026  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:06:41.026  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:06:41.026  #undef SPDK_CONFIG_SHARED
00:06:41.026  #undef SPDK_CONFIG_SMA
00:06:41.026  #define SPDK_CONFIG_TESTS 1
00:06:41.026  #undef SPDK_CONFIG_TSAN
00:06:41.026  #undef SPDK_CONFIG_UBLK
00:06:41.026  #define SPDK_CONFIG_UBSAN 1
00:06:41.026  #define SPDK_CONFIG_UNIT_TESTS 1
00:06:41.027  #undef SPDK_CONFIG_URING
00:06:41.027  #define SPDK_CONFIG_URING_PATH 
00:06:41.027  #undef SPDK_CONFIG_URING_ZNS
00:06:41.027  #undef SPDK_CONFIG_USDT
00:06:41.027  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:06:41.027  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:06:41.027  #undef SPDK_CONFIG_VFIO_USER
00:06:41.027  #define SPDK_CONFIG_VFIO_USER_DIR 
00:06:41.027  #define SPDK_CONFIG_VHOST 1
00:06:41.027  #define SPDK_CONFIG_VIRTIO 1
00:06:41.027  #undef SPDK_CONFIG_VTUNE
00:06:41.027  #define SPDK_CONFIG_VTUNE_DIR 
00:06:41.027  #define SPDK_CONFIG_WERROR 1
00:06:41.027  #define SPDK_CONFIG_WPDK_DIR 
00:06:41.027  #undef SPDK_CONFIG_XNVME
00:06:41.027  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:06:41.027  +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
00:06:41.027  ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:06:41.027  +++ shopt -s extglob
00:06:41.027  +++ [[ -e /bin/wpdk_common.sh ]]
00:06:41.027  +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:41.027  +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:41.027  ++++ PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:41.027  ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:41.027  ++++ PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:41.027  ++++ export PATH
00:06:41.027  ++++ echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:06:41.027  ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:06:41.027  +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:06:41.027  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:06:41.027  +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:06:41.027  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:06:41.027  +++ _pmrootdir=/home/vagrant/spdk_repo/spdk
00:06:41.027  +++ TEST_TAG=N/A
00:06:41.027  +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:06:41.027  +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:06:41.027  ++++ uname -s
00:06:41.027  +++ PM_OS=Linux
00:06:41.027  +++ MONITOR_RESOURCES_SUDO=()
00:06:41.027  +++ declare -A MONITOR_RESOURCES_SUDO
00:06:41.027  +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:06:41.027  +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:06:41.027  +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:06:41.027  +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:06:41.027  +++ SUDO[0]=
00:06:41.027  +++ SUDO[1]='sudo -E'
00:06:41.027  +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:06:41.027  +++ [[ Linux == FreeBSD ]]
00:06:41.027  +++ [[ Linux == Linux ]]
00:06:41.027  +++ [[ QEMU != QEMU ]]
00:06:41.027  +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:06:41.027  ++ : 1
00:06:41.027  ++ export RUN_NIGHTLY
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_AUTOTEST_DEBUG_APPS
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_RUN_VALGRIND
00:06:41.027  ++ : 1
00:06:41.027  ++ export SPDK_RUN_FUNCTIONAL_TEST
00:06:41.027  ++ : 1
00:06:41.027  ++ export SPDK_TEST_UNITTEST
00:06:41.027  ++ :
00:06:41.027  ++ export SPDK_TEST_AUTOBUILD
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_RELEASE_BUILD
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_ISAL
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_ISCSI
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_ISCSI_INITIATOR
00:06:41.027  ++ : 1
00:06:41.027  ++ export SPDK_TEST_NVME
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVME_PMR
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVME_BP
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVME_CLI
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVME_CUSE
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVME_FDP
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVMF
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_VFIOUSER
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_VFIOUSER_QEMU
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_FUZZER
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_FUZZER_SHORT
00:06:41.027  ++ : rdma
00:06:41.027  ++ export SPDK_TEST_NVMF_TRANSPORT
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_RBD
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_VHOST
00:06:41.027  ++ : 1
00:06:41.027  ++ export SPDK_TEST_BLOCKDEV
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_RAID
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_IOAT
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_BLOBFS
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_VHOST_INIT
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_LVOL
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_VBDEV_COMPRESS
00:06:41.027  ++ : 1
00:06:41.027  ++ export SPDK_RUN_ASAN
00:06:41.027  ++ : 1
00:06:41.027  ++ export SPDK_RUN_UBSAN
00:06:41.027  ++ : /home/vagrant/spdk_repo/dpdk/build
00:06:41.027  ++ export SPDK_RUN_EXTERNAL_DPDK
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_RUN_NON_ROOT
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_CRYPTO
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_FTL
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_OCF
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_VMD
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_OPAL
00:06:41.027  ++ : main
00:06:41.027  ++ export SPDK_TEST_NATIVE_DPDK
00:06:41.027  ++ : true
00:06:41.027  ++ export SPDK_AUTOTEST_X
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_URING
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_USDT
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_USE_IGB_UIO
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_SCHEDULER
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_SCANBUILD
00:06:41.027  ++ :
00:06:41.027  ++ export SPDK_TEST_NVMF_NICS
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_SMA
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_DAOS
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_XNVME
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_ACCEL
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_ACCEL_DSA
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_ACCEL_IAA
00:06:41.027  ++ :
00:06:41.027  ++ export SPDK_TEST_FUZZER_TARGET
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVMF_MDNS
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_JSONRPC_GO_CLIENT
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_SETUP
00:06:41.027  ++ : 0
00:06:41.027  ++ export SPDK_TEST_NVME_INTERRUPT
00:06:41.027  ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:06:41.027  ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:06:41.027  ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:06:41.027  ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:06:41.027  ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:41.027  ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:41.027  ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:41.027  ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:06:41.027  ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:06:41.027  ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:06:41.027  ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:41.027  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:06:41.027  ++ export PYTHONDONTWRITEBYTECODE=1
00:06:41.028  ++ PYTHONDONTWRITEBYTECODE=1
00:06:41.028  ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:06:41.028  ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:06:41.028  ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:06:41.028  ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:06:41.028  ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:06:41.028  ++ rm -rf /var/tmp/asan_suppression_file
00:06:41.028  ++ cat
00:06:41.028  ++ echo leak:libfuse3.so
00:06:41.028  ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:06:41.028  ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:06:41.028  ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:06:41.028  ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:06:41.028  ++ '[' -z /var/spdk/dependencies ']'
00:06:41.028  ++ export DEPENDENCY_DIR
00:06:41.028  ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:06:41.028  ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:06:41.028  ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:06:41.028  ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:06:41.028  ++ export QEMU_BIN=
00:06:41.028  ++ QEMU_BIN=
00:06:41.028  ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:06:41.028  ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:06:41.028  ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:06:41.028  ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:06:41.028  ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:41.028  ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:06:41.028  ++ _LCOV_MAIN=0
00:06:41.028  ++ _LCOV_LLVM=1
00:06:41.028  ++ _LCOV=
00:06:41.028  ++ [[ '' == *clang* ]]
00:06:41.028  ++ [[ 0 -eq 1 ]]
00:06:41.028  ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:06:41.028  ++ _lcov_opt[_LCOV_MAIN]=
00:06:41.028  ++ lcov_opt=
00:06:41.028  ++ '[' 0 -eq 0 ']'
00:06:41.028  ++ export valgrind=
00:06:41.028  ++ valgrind=
00:06:41.028  +++ uname -s
00:06:41.028  ++ '[' Linux = Linux ']'
00:06:41.028  ++ HUGEMEM=4096
00:06:41.028  ++ export CLEAR_HUGE=yes
00:06:41.028  ++ CLEAR_HUGE=yes
00:06:41.028  ++ MAKE=make
00:06:41.028  +++ nproc
00:06:41.028  ++ MAKEFLAGS=-j10
00:06:41.028  ++ export HUGEMEM=4096
00:06:41.028  ++ HUGEMEM=4096
00:06:41.028  ++ NO_HUGE=()
00:06:41.028  ++ TEST_MODE=
00:06:41.028  ++ [[ -z '' ]]
00:06:41.028  ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:06:41.028  ++ exec
00:06:41.028  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:06:41.028  ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server
00:06:41.028  ++ set_test_storage 2147483648
00:06:41.028  ++ [[ -v testdir ]]
00:06:41.028  ++ local requested_size=2147483648
00:06:41.028  ++ local mount target_dir
00:06:41.028  ++ local -A mounts fss sizes avails uses
00:06:41.028  ++ local source fs size avail mount use
00:06:41.028  ++ local storage_fallback storage_candidates
00:06:41.028  +++ mktemp -udt spdk.XXXXXX
00:06:41.028  ++ storage_fallback=/tmp/spdk.Y1T237
00:06:41.028  ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:06:41.028  ++ [[ -n '' ]]
00:06:41.028  ++ [[ -n '' ]]
00:06:41.028  ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.Y1T237/tests/unit /tmp/spdk.Y1T237
00:06:41.028  ++ requested_size=2214592512
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  +++ df -T
00:06:41.028  +++ grep -v Filesystem
00:06:41.028  ++ mounts["$mount"]=tmpfs
00:06:41.028  ++ fss["$mount"]=tmpfs
00:06:41.028  ++ avails["$mount"]=1252601856
00:06:41.028  ++ sizes["$mount"]=1253683200
00:06:41.028  ++ uses["$mount"]=1081344
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  ++ mounts["$mount"]=/dev/vda1
00:06:41.028  ++ fss["$mount"]=ext4
00:06:41.028  ++ avails["$mount"]=8647446528
00:06:41.028  ++ sizes["$mount"]=20616794112
00:06:41.028  ++ uses["$mount"]=11952570368
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  ++ mounts["$mount"]=tmpfs
00:06:41.028  ++ fss["$mount"]=tmpfs
00:06:41.028  ++ avails["$mount"]=6268399616
00:06:41.028  ++ sizes["$mount"]=6268399616
00:06:41.028  ++ uses["$mount"]=0
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  ++ mounts["$mount"]=tmpfs
00:06:41.028  ++ fss["$mount"]=tmpfs
00:06:41.028  ++ avails["$mount"]=5242880
00:06:41.028  ++ sizes["$mount"]=5242880
00:06:41.028  ++ uses["$mount"]=0
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  ++ mounts["$mount"]=/dev/vda15
00:06:41.028  ++ fss["$mount"]=vfat
00:06:41.028  ++ avails["$mount"]=103061504
00:06:41.028  ++ sizes["$mount"]=109395968
00:06:41.028  ++ uses["$mount"]=6334464
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  ++ mounts["$mount"]=tmpfs
00:06:41.028  ++ fss["$mount"]=tmpfs
00:06:41.028  ++ avails["$mount"]=1253675008
00:06:41.028  ++ sizes["$mount"]=1253679104
00:06:41.028  ++ uses["$mount"]=4096
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:06:41.028  ++ fss["$mount"]=fuse.sshfs
00:06:41.028  ++ avails["$mount"]=97182965760
00:06:41.028  ++ sizes["$mount"]=105088212992
00:06:41.028  ++ uses["$mount"]=2519814144
00:06:41.028  ++ read -r source fs size use avail _ mount
00:06:41.028  ++ printf '* Looking for test storage...\n'
00:06:41.028  * Looking for test storage...
00:06:41.028  ++ local target_space new_size
00:06:41.028  ++ for target_dir in "${storage_candidates[@]}"
00:06:41.028  +++ df /home/vagrant/spdk_repo/spdk/test/unit
00:06:41.028  +++ awk '$1 !~ /Filesystem/{print $6}'
00:06:41.028  ++ mount=/
00:06:41.028  ++ target_space=8647446528
00:06:41.028  ++ (( target_space == 0 || target_space < requested_size ))
00:06:41.028  ++ (( target_space >= requested_size ))
00:06:41.028  ++ [[ ext4 == tmpfs ]]
00:06:41.028  ++ [[ ext4 == ramfs ]]
00:06:41.028  ++ [[ / == / ]]
00:06:41.028  ++ new_size=14167162880
00:06:41.028  ++ (( new_size * 100 / sizes[/] > 95 ))
00:06:41.028  ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:06:41.028  ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:06:41.028  ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit
00:06:41.028  * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit
00:06:41.028  ++ return 0
00:06:41.028  ++ set -o errtrace
00:06:41.028  ++ shopt -s extdebug
00:06:41.028  ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:06:41.028  ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@1685 -- # true
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@1687 -- # xtrace_fd
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@29 -- # exec
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@31 -- # xtrace_restore
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@18 -- # set -x
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:41.028     04:56:54 unittest -- common/autotest_common.sh@1693 -- # lcov --version
00:06:41.028     04:56:54 unittest -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:41.028    04:56:54 unittest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:41.028    04:56:54 unittest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:41.028    04:56:54 unittest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:41.028    04:56:54 unittest -- scripts/common.sh@336 -- # IFS=.-:
00:06:41.028    04:56:54 unittest -- scripts/common.sh@336 -- # read -ra ver1
00:06:41.028    04:56:54 unittest -- scripts/common.sh@337 -- # IFS=.-:
00:06:41.028    04:56:54 unittest -- scripts/common.sh@337 -- # read -ra ver2
00:06:41.028    04:56:54 unittest -- scripts/common.sh@338 -- # local 'op=<'
00:06:41.028    04:56:54 unittest -- scripts/common.sh@340 -- # ver1_l=2
00:06:41.028    04:56:54 unittest -- scripts/common.sh@341 -- # ver2_l=1
00:06:41.028    04:56:54 unittest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:41.028    04:56:54 unittest -- scripts/common.sh@344 -- # case "$op" in
00:06:41.028    04:56:54 unittest -- scripts/common.sh@345 -- # : 1
00:06:41.028    04:56:54 unittest -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:41.028    04:56:54 unittest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:41.028     04:56:54 unittest -- scripts/common.sh@365 -- # decimal 1
00:06:41.028     04:56:54 unittest -- scripts/common.sh@353 -- # local d=1
00:06:41.028     04:56:54 unittest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:41.028     04:56:54 unittest -- scripts/common.sh@355 -- # echo 1
00:06:41.028    04:56:54 unittest -- scripts/common.sh@365 -- # ver1[v]=1
00:06:41.028     04:56:54 unittest -- scripts/common.sh@366 -- # decimal 2
00:06:41.028     04:56:54 unittest -- scripts/common.sh@353 -- # local d=2
00:06:41.028     04:56:54 unittest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:41.028     04:56:54 unittest -- scripts/common.sh@355 -- # echo 2
00:06:41.028    04:56:54 unittest -- scripts/common.sh@366 -- # ver2[v]=2
00:06:41.028    04:56:54 unittest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:41.028    04:56:54 unittest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:41.028    04:56:54 unittest -- scripts/common.sh@368 -- # return 0
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:41.028    04:56:54 unittest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:41.028  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.028  		--rc genhtml_branch_coverage=1
00:06:41.029  		--rc genhtml_function_coverage=1
00:06:41.029  		--rc genhtml_legend=1
00:06:41.029  		--rc geninfo_all_blocks=1
00:06:41.029  		--rc geninfo_unexecuted_blocks=1
00:06:41.029  		
00:06:41.029  		'
00:06:41.029    04:56:54 unittest -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:41.029  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.029  		--rc genhtml_branch_coverage=1
00:06:41.029  		--rc genhtml_function_coverage=1
00:06:41.029  		--rc genhtml_legend=1
00:06:41.029  		--rc geninfo_all_blocks=1
00:06:41.029  		--rc geninfo_unexecuted_blocks=1
00:06:41.029  		
00:06:41.029  		'
00:06:41.029    04:56:54 unittest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:06:41.029  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.029  		--rc genhtml_branch_coverage=1
00:06:41.029  		--rc genhtml_function_coverage=1
00:06:41.029  		--rc genhtml_legend=1
00:06:41.029  		--rc geninfo_all_blocks=1
00:06:41.029  		--rc geninfo_unexecuted_blocks=1
00:06:41.029  		
00:06:41.029  		'
00:06:41.029    04:56:54 unittest -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:06:41.029  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:41.029  		--rc genhtml_branch_coverage=1
00:06:41.029  		--rc genhtml_function_coverage=1
00:06:41.029  		--rc genhtml_legend=1
00:06:41.029  		--rc geninfo_all_blocks=1
00:06:41.029  		--rc geninfo_unexecuted_blocks=1
00:06:41.029  		
00:06:41.029  		'
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@159 -- # '[' 0 -eq 1 ']'
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@166 -- # '[' -z x ']'
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@173 -- # '[' 0 -eq 1 ']'
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@182 -- # [[ y == y ]]
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@183 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@184 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:06:41.029   04:56:54 unittest -- unit/unittest.sh@186 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:06:47.600  /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:06:47.600  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno
00:07:26.316    04:57:39 unittest -- unit/unittest.sh@190 -- # uname -m
00:07:26.316   04:57:39 unittest -- unit/unittest.sh@190 -- # '[' x86_64 = aarch64 ']'
00:07:26.316   04:57:39 unittest -- unit/unittest.sh@194 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:07:26.316   04:57:39 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.316   04:57:39 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.316   04:57:39 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:26.316  ************************************
00:07:26.316  START TEST unittest_pci_event
00:07:26.316  ************************************
00:07:26.316   04:57:39 unittest.unittest_pci_event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:07:26.316  
00:07:26.316  
00:07:26.316       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.317       http://cunit.sourceforge.net/
00:07:26.317  
00:07:26.317  
00:07:26.317  Suite: pci_event
00:07:26.317    Test: test_pci_parse_event ...[2024-11-20 04:57:39.192741] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 162:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 0000
00:07:26.317  [2024-11-20 04:57:39.193477] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci_event.c: 185:parse_subsystem_event: *ERROR*: Invalid format for PCI device BDF: 000000
00:07:26.317  passed
00:07:26.317  
00:07:26.317  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.317                suites      1      1    n/a      0        0
00:07:26.317                 tests      1      1      1      0        0
00:07:26.317               asserts     15     15     15      0      n/a
00:07:26.317  
00:07:26.317  Elapsed time =    0.001 seconds
00:07:26.317  
00:07:26.317  real	0m0.038s
00:07:26.317  user	0m0.008s
00:07:26.317  sys	0m0.029s
00:07:26.317   04:57:39 unittest.unittest_pci_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.317   04:57:39 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x
00:07:26.317  ************************************
00:07:26.317  END TEST unittest_pci_event
00:07:26.317  ************************************
00:07:26.317   04:57:39 unittest -- unit/unittest.sh@195 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:07:26.317   04:57:39 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.317   04:57:39 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.317   04:57:39 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:26.317  ************************************
00:07:26.317  START TEST unittest_include
00:07:26.317  ************************************
00:07:26.317   04:57:39 unittest.unittest_include -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:07:26.317  
00:07:26.317  
00:07:26.317       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.317       http://cunit.sourceforge.net/
00:07:26.317  
00:07:26.317  
00:07:26.317  Suite: histogram
00:07:26.317    Test: histogram_test ...passed
00:07:26.317    Test: histogram_merge ...passed
00:07:26.317  
00:07:26.317  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.317                suites      1      1    n/a      0        0
00:07:26.317                 tests      2      2      2      0        0
00:07:26.317               asserts     50     50     50      0      n/a
00:07:26.317  
00:07:26.317  Elapsed time =    0.006 seconds
00:07:26.317  
00:07:26.317  real	0m0.036s
00:07:26.317  user	0m0.024s
00:07:26.317  sys	0m0.012s
00:07:26.317   04:57:39 unittest.unittest_include -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.317   04:57:39 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x
00:07:26.317  ************************************
00:07:26.317  END TEST unittest_include
00:07:26.317  ************************************
00:07:26.317   04:57:39 unittest -- unit/unittest.sh@196 -- # run_test unittest_bdev unittest_bdev
00:07:26.317   04:57:39 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.317   04:57:39 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.317   04:57:39 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:26.317  ************************************
00:07:26.317  START TEST unittest_bdev
00:07:26.317  ************************************
00:07:26.317   04:57:39 unittest.unittest_bdev -- common/autotest_common.sh@1129 -- # unittest_bdev
00:07:26.317   04:57:39 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut
00:07:26.317  
00:07:26.317  
00:07:26.317       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.317       http://cunit.sourceforge.net/
00:07:26.317  
00:07:26.317  
00:07:26.317  Suite: bdev
00:07:26.317    Test: bytes_to_blocks_test ...passed
00:07:26.317    Test: num_blocks_test ...passed
00:07:26.317    Test: io_valid_test ...passed
00:07:26.317    Test: open_write_test ...[2024-11-20 04:57:39.451127] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut
00:07:26.317  [2024-11-20 04:57:39.451475] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut
00:07:26.317  [2024-11-20 04:57:39.451626] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut
00:07:26.317  passed
00:07:26.317    Test: claim_test ...passed
00:07:26.317    Test: alias_add_del_test ...[2024-11-20 04:57:39.539444] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name bdev0 already exists
00:07:26.317  [2024-11-20 04:57:39.539561] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4730:spdk_bdev_alias_add: *ERROR*: Empty alias passed
00:07:26.317  [2024-11-20 04:57:39.539628] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists
00:07:26.317  passed
00:07:26.317    Test: get_device_stat_test ...passed
00:07:26.317    Test: bdev_io_types_test ...passed
00:07:26.317    Test: bdev_io_wait_test ...passed
00:07:26.317    Test: bdev_io_spans_split_test ...passed
00:07:26.317    Test: bdev_io_boundary_split_test ...passed
00:07:26.317    Test: bdev_io_max_size_and_segment_split_test ...[2024-11-20 04:57:39.686213] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3285:_bdev_rw_split: *ERROR*: The first child io was less than a block size
00:07:26.317  passed
00:07:26.317    Test: bdev_io_mix_split_test ...passed
00:07:26.317    Test: bdev_io_split_with_io_wait ...passed
00:07:26.317    Test: bdev_io_write_unit_split_test ...[2024-11-20 04:57:39.779803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:07:26.317  [2024-11-20 04:57:39.779909] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:07:26.317  [2024-11-20 04:57:39.779945] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32
00:07:26.317  [2024-11-20 04:57:39.779999] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2828:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64
00:07:26.317  passed
00:07:26.317    Test: bdev_io_alignment_with_boundary ...passed
00:07:26.317    Test: bdev_io_alignment ...passed
00:07:26.317    Test: bdev_histograms ...passed
00:07:26.317    Test: bdev_write_zeroes ...passed
00:07:26.317    Test: bdev_compare_and_write ...passed
00:07:26.317    Test: bdev_compare ...passed
00:07:26.317    Test: bdev_compare_emulated ...passed
00:07:26.317    Test: bdev_zcopy_write ...passed
00:07:26.317    Test: bdev_zcopy_read ...passed
00:07:26.317    Test: bdev_open_while_hotremove ...passed
00:07:26.317    Test: bdev_close_while_hotremove ...passed
00:07:26.317    Test: bdev_open_ext_test ...[2024-11-20 04:57:40.139424] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8305:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:07:26.317  passed
00:07:26.317    Test: bdev_open_ext_unregister ...[2024-11-20 04:57:40.139635] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8305:spdk_bdev_open_ext: *ERROR*: Missing event callback function
00:07:26.317  passed
00:07:26.317    Test: bdev_set_io_timeout ...passed
00:07:26.317    Test: bdev_set_qd_sampling ...passed
00:07:26.317    Test: lba_range_overlap ...passed
00:07:26.317    Test: lock_lba_range_check_ranges ...passed
00:07:26.317    Test: lock_lba_range_with_io_outstanding ...passed
00:07:26.577    Test: lock_lba_range_overlapped ...passed
00:07:26.577    Test: bdev_quiesce ...[2024-11-20 04:57:40.295077] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10284:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found.
00:07:26.577  passed
00:07:26.577    Test: bdev_io_abort ...passed
00:07:26.577    Test: bdev_unmap ...passed
00:07:26.577    Test: bdev_write_zeroes_split_test ...passed
00:07:26.577    Test: bdev_set_options_test ...passed
00:07:26.577    Test: bdev_get_memory_domains ...passed
00:07:26.577    Test: bdev_io_ext ...[2024-11-20 04:57:40.393467] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 503:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:07:26.577  passed
00:07:26.577    Test: bdev_io_ext_no_opts ...passed
00:07:26.577    Test: bdev_io_ext_invalid_opts ...passed
00:07:26.577    Test: bdev_io_ext_split ...passed
00:07:26.836    Test: bdev_io_ext_bounce_buffer ...passed
00:07:26.836    Test: bdev_register_uuid_alias ...[2024-11-20 04:57:40.547911] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name a9675e1f-0900-411b-9ba6-dd3b2d61c8aa already exists
00:07:26.836  [2024-11-20 04:57:40.547989] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:a9675e1f-0900-411b-9ba6-dd3b2d61c8aa alias for bdev bdev0
00:07:26.836  passed
00:07:26.836    Test: bdev_unregister_by_name ...[2024-11-20 04:57:40.562128] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8095:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1
00:07:26.836  passed
00:07:26.836    Test: for_each_bdev_test ...[2024-11-20 04:57:40.562190] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8103:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module.
00:07:26.836  passed
00:07:26.836    Test: bdev_seek_test ...passed
00:07:26.836    Test: bdev_copy ...passed
00:07:26.836    Test: bdev_copy_split_test ...passed
00:07:26.836    Test: examine_locks ...passed
00:07:26.836    Test: claim_v2_rwo ...[2024-11-20 04:57:40.647994] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:26.836  passed
00:07:26.836    Test: claim_v2_rom ...[2024-11-20 04:57:40.648082] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8839:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648102] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648116] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648129] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648208] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8834:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims
00:07:26.836  [2024-11-20 04:57:40.648375] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648433] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648460] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:26.836  passed
00:07:26.836    Test: claim_v2_rwm ...[2024-11-20 04:57:40.648494] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648546] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8877:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims
00:07:26.836  [2024-11-20 04:57:40.648579] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8872:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:26.836  [2024-11-20 04:57:40.648692] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8907:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:07:26.836  [2024-11-20 04:57:40.648750] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8199:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648779] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648803] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648820] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648846] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8927:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.648895] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8907:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:07:26.836  passed
00:07:26.836    Test: claim_v2_existing_writer ...[2024-11-20 04:57:40.649040] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8872:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:26.836  [2024-11-20 04:57:40.649073] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8872:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:07:26.836  passed
00:07:26.836    Test: claim_v2_existing_v1 ...passed
00:07:26.836    Test: claim_v1_existing_v2 ...[2024-11-20 04:57:40.649165] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.649196] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.649212] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.649336] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.649395] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:07:26.836  [2024-11-20 04:57:40.649429] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8676:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:07:26.836  passed
00:07:26.836    Test: examine_claimed ...[2024-11-20 04:57:40.664431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:07:26.836  passed
00:07:26.836    Test: examine_claimed_manual ...[2024-11-20 04:57:40.692628] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9004:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:07:26.836  passed
00:07:26.836    Test: get_numa_id ...passed
00:07:26.836    Test: get_device_stat_with_reset ...passed
00:07:26.836  
00:07:26.836  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:26.836                suites      1      1    n/a      0        0
00:07:26.836                 tests     62     62     62      0        0
00:07:26.836               asserts   4705   4705   4705      0      n/a
00:07:26.836  
00:07:26.836  Elapsed time =    1.358 seconds
00:07:26.836   04:57:40 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
00:07:26.836  
00:07:26.836  
00:07:26.836       CUnit - A unit testing framework for C - Version 2.1-3
00:07:26.836       http://cunit.sourceforge.net/
00:07:26.836  
00:07:26.836  
00:07:26.836  Suite: nvme
00:07:26.836    Test: test_create_ctrlr ...passed
00:07:27.097    Test: test_reset_ctrlr ...[2024-11-20 04:57:40.791403] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  passed
00:07:27.097    Test: test_race_between_reset_and_destruct_ctrlr ...passed
00:07:27.097    Test: test_failover_ctrlr ...passed
00:07:27.097    Test: test_race_between_failover_and_add_secondary_trid ...[2024-11-20 04:57:40.794281] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.794552] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.794808] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  passed
00:07:27.097    Test: test_pending_reset ...[2024-11-20 04:57:40.796795] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.797088] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:07:27.097  passed
00:07:27.097    Test: test_attach_ctrlr ...[2024-11-20 04:57:40.798348] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:07:27.097  passed
00:07:27.097    Test: test_aer_cb ...passed
00:07:27.097    Test: test_submit_nvme_cmd ...passed
00:07:27.097    Test: test_add_remove_trid ...passed
00:07:27.097    Test: test_abort ...[2024-11-20 04:57:40.801992] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7953:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure.
00:07:27.097  passed
00:07:27.097    Test: test_get_io_qpair ...passed
00:07:27.097    Test: test_bdev_unregister ...passed
00:07:27.097    Test: test_compare_ns ...passed
00:07:27.097    Test: test_init_ana_log_page ...passed
00:07:27.097    Test: test_get_memory_domains ...passed
00:07:27.097    Test: test_reconnect_qpair ...[2024-11-20 04:57:40.805054] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 17] Resetting controller failed.
00:07:27.097  passed
00:07:27.097    Test: test_create_bdev_ctrlr ...[2024-11-20 04:57:40.805680] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5755:bdev_nvme_check_multipath: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 18] cntlid 18 are duplicated.
00:07:27.097  passed
00:07:27.097    Test: test_add_multi_ns_to_bdev ...[2024-11-20 04:57:40.807168] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4912:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical.
00:07:27.097  passed
00:07:27.097    Test: test_add_multi_io_paths_to_nbdev_ch ...passed
00:07:27.097    Test: test_admin_path ...passed
00:07:27.097    Test: test_reset_bdev_ctrlr ...[2024-11-20 04:57:40.812097] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.812382] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.812632] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.813123] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.813525] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.813725] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.814196] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.814375] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.814706] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.814779] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.814946] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.815021] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:07:27.097  passed
00:07:27.097    Test: test_find_io_path ...passed
00:07:27.097    Test: test_retry_io_if_ana_state_is_updating ...passed
00:07:27.097    Test: test_retry_io_for_io_path_error ...passed
00:07:27.097    Test: test_retry_io_count ...passed
00:07:27.097    Test: test_concurrent_read_ana_log_page ...passed
00:07:27.097    Test: test_retry_io_for_ana_error ...passed
00:07:27.097    Test: test_check_io_error_resiliency_params ...[2024-11-20 04:57:40.818882] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6595:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1.
00:07:27.097  [2024-11-20 04:57:40.818974] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6599:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:07:27.097  [2024-11-20 04:57:40.819014] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6608:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:07:27.097  [2024-11-20 04:57:40.819044] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6611:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec.
00:07:27.097  [2024-11-20 04:57:40.819073] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6623:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:07:27.097  [2024-11-20 04:57:40.819126] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6623:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:07:27.097  [2024-11-20 04:57:40.819177] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6603:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec.
00:07:27.097  [2024-11-20 04:57:40.819231] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6618:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec.
00:07:27.097  passed
00:07:27.097    Test: test_retry_io_if_ctrlr_is_resetting ...[2024-11-20 04:57:40.819281] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6615:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec.
00:07:27.097  passed
00:07:27.097    Test: test_reconnect_ctrlr ...[2024-11-20 04:57:40.820221] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.820394] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.820720] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.820844] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.820982] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  passed
00:07:27.097    Test: test_retry_failover_ctrlr ...[2024-11-20 04:57:40.821372] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.097  passed
00:07:27.097    Test: test_fail_path ...[2024-11-20 04:57:40.822061] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.822248] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.822427] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.822582] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:27.097  [2024-11-20 04:57:40.822764] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:07:27.097  passed
00:07:27.098    Test: test_nvme_ns_cmp ...passed
00:07:27.098    Test: test_ana_transition ...passed
00:07:27.098    Test: test_set_preferred_path ...passed
00:07:27.098    Test: test_find_next_io_path ...passed
00:07:27.098    Test: test_find_io_path_min_qd ...passed
00:07:27.098    Test: test_disable_auto_failback ...[2024-11-20 04:57:40.824685] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 45] Resetting controller failed.
00:07:27.098  passed
00:07:27.098    Test: test_set_multipath_policy ...passed
00:07:27.098    Test: test_uuid_generation ...passed
00:07:27.098    Test: test_retry_io_to_same_path ...passed
00:07:27.098    Test: test_race_between_reset_and_disconnected ...passed
00:07:27.098    Test: test_ctrlr_op_rpc ...passed
00:07:27.098    Test: test_bdev_ctrlr_op_rpc ...passed
00:07:27.098    Test: test_disable_enable_ctrlr ...[2024-11-20 04:57:40.828771] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.098  [2024-11-20 04:57:40.828961] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:07:27.098  passed
00:07:27.098    Test: test_delete_ctrlr_done ...passed
00:07:27.098    Test: test_ns_remove_during_reset ...passed
00:07:27.098    Test: test_io_path_is_current ...passed
00:07:27.098    Test: test_bdev_reset_abort_io ...passed
00:07:27.098    Test: test_race_between_clear_pending_resets_and_reset_ctrlr_complete ...passed
00:07:27.098  
00:07:27.098  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.098                suites      1      1    n/a      0        0
00:07:27.098                 tests     51     51     51      0        0
00:07:27.098               asserts   4017   4017   4017      0      n/a
00:07:27.098  
00:07:27.098  Elapsed time =    0.041 seconds
00:07:27.098   04:57:40 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut
00:07:27.098  
00:07:27.098  
00:07:27.098       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.098       http://cunit.sourceforge.net/
00:07:27.098  
00:07:27.098  Test Options
00:07:27.098  blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2
00:07:27.098  
00:07:27.098  Suite: raid
00:07:27.098    Test: test_create_raid ...passed
00:07:27.098    Test: test_create_raid_superblock ...passed
00:07:27.098    Test: test_delete_raid ...passed
00:07:27.098    Test: test_create_raid_invalid_args ...[2024-11-20 04:57:40.867117] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1521:_raid_bdev_create: *ERROR*: Unsupported raid level '-1'
00:07:27.098  [2024-11-20 04:57:40.867492] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1515:_raid_bdev_create: *ERROR*: Invalid strip size 1231
00:07:27.098  [2024-11-20 04:57:40.868077] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1505:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1
00:07:27.098  [2024-11-20 04:57:40.868294] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:07:27.098  [2024-11-20 04:57:40.868388] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:07:27.098  [2024-11-20 04:57:40.869173] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:07:27.098  [2024-11-20 04:57:40.869230] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:07:27.098  passed
00:07:27.098    Test: test_delete_raid_invalid_args ...passed
00:07:27.098    Test: test_io_channel ...passed
00:07:27.098    Test: test_reset_io ...passed
00:07:27.098    Test: test_multi_raid ...passed
00:07:27.098    Test: test_io_type_supported ...passed
00:07:27.098    Test: test_raid_json_dump_info ...passed
00:07:27.098    Test: test_context_size ...passed
00:07:27.098    Test: test_raid_level_conversions ...passed
00:07:27.098    Test: test_raid_io_split ...passed
00:07:27.098    Test: test_raid_process ...passed
00:07:27.098    Test: test_raid_process_with_qos ...passed
00:07:27.098  
00:07:27.098  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.098                suites      1      1    n/a      0        0
00:07:27.098                 tests     15     15     15      0        0
00:07:27.098               asserts   6602   6602   6602      0      n/a
00:07:27.098  
00:07:27.098  Elapsed time =    0.022 seconds
00:07:27.098   04:57:40 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut
00:07:27.098  
00:07:27.098  
00:07:27.098       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.098       http://cunit.sourceforge.net/
00:07:27.098  
00:07:27.098  
00:07:27.098  Suite: raid_sb
00:07:27.098    Test: test_raid_bdev_write_superblock ...passed
00:07:27.098    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:27.098    Test: test_raid_bdev_parse_superblock ...[2024-11-20 04:57:40.918981] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:27.098  passed
00:07:27.098  Suite: raid_sb_md
00:07:27.098    Test: test_raid_bdev_write_superblock ...passed
00:07:27.098    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:27.098    Test: test_raid_bdev_parse_superblock ...[2024-11-20 04:57:40.919699] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:27.098  passed
00:07:27.098  Suite: raid_sb_md_interleaved
00:07:27.098    Test: test_raid_bdev_write_superblock ...passed
00:07:27.098    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:07:27.098    Test: test_raid_bdev_parse_superblock ...[2024-11-20 04:57:40.920484] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 165:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:07:27.098  passed
00:07:27.098  
00:07:27.098  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.098                suites      3      3    n/a      0        0
00:07:27.098                 tests      9      9      9      0        0
00:07:27.098               asserts    139    139    139      0      n/a
00:07:27.098  
00:07:27.098  Elapsed time =    0.002 seconds
00:07:27.098   04:57:40 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut
00:07:27.098  
00:07:27.098  
00:07:27.098       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.098       http://cunit.sourceforge.net/
00:07:27.098  
00:07:27.098  
00:07:27.098  Suite: concat
00:07:27.098    Test: test_concat_start ...passed
00:07:27.098    Test: test_concat_rw ...passed
00:07:27.098    Test: test_concat_null_payload ...passed
00:07:27.098  
00:07:27.098  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.098                suites      1      1    n/a      0        0
00:07:27.098                 tests      3      3      3      0        0
00:07:27.098               asserts   8460   8460   8460      0      n/a
00:07:27.098  
00:07:27.098  Elapsed time =    0.005 seconds
00:07:27.098   04:57:40 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut
00:07:27.098  
00:07:27.098  
00:07:27.098       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.098       http://cunit.sourceforge.net/
00:07:27.098  
00:07:27.098  
00:07:27.098  Suite: raid0
00:07:27.098    Test: test_write_io ...passed
00:07:27.098    Test: test_read_io ...passed
00:07:27.098    Test: test_unmap_io ...passed
00:07:27.098    Test: test_io_failure ...passed
00:07:27.098  Suite: raid0_dif
00:07:27.098    Test: test_write_io ...passed
00:07:27.098    Test: test_read_io ...passed
00:07:27.358    Test: test_unmap_io ...passed
00:07:27.358    Test: test_io_failure ...passed
00:07:27.358  
00:07:27.358  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.358                suites      2      2    n/a      0        0
00:07:27.358                 tests      8      8      8      0        0
00:07:27.358               asserts 368291 368291 368291      0      n/a
00:07:27.358  
00:07:27.358  Elapsed time =    0.140 seconds
00:07:27.358   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut
00:07:27.358  
00:07:27.358  
00:07:27.358       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.358       http://cunit.sourceforge.net/
00:07:27.358  
00:07:27.358  
00:07:27.358  Suite: raid1
00:07:27.358    Test: test_raid1_start ...passed
00:07:27.358    Test: test_raid1_read_balancing ...passed
00:07:27.358    Test: test_raid1_write_error ...passed
00:07:27.358    Test: test_raid1_read_error ...passed
00:07:27.358  
00:07:27.358  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.358                suites      1      1    n/a      0        0
00:07:27.358                 tests      4      4      4      0        0
00:07:27.358               asserts   4374   4374   4374      0      n/a
00:07:27.358  
00:07:27.358  Elapsed time =    0.004 seconds
00:07:27.358   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut
00:07:27.358  
00:07:27.358  
00:07:27.358       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.358       http://cunit.sourceforge.net/
00:07:27.358  
00:07:27.358  
00:07:27.358  Suite: zone
00:07:27.358    Test: test_zone_get_operation ...passed
00:07:27.358    Test: test_bdev_zone_get_info ...passed
00:07:27.358    Test: test_bdev_zone_management ...passed
00:07:27.358    Test: test_bdev_zone_append ...passed
00:07:27.358    Test: test_bdev_zone_append_with_md ...passed
00:07:27.358    Test: test_bdev_zone_appendv ...passed
00:07:27.358    Test: test_bdev_zone_appendv_with_md ...passed
00:07:27.358    Test: test_bdev_io_get_append_location ...passed
00:07:27.358  
00:07:27.358  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.358                suites      1      1    n/a      0        0
00:07:27.358                 tests      8      8      8      0        0
00:07:27.358               asserts     94     94     94      0      n/a
00:07:27.358  
00:07:27.358  Elapsed time =    0.000 seconds
00:07:27.358   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut
00:07:27.358  
00:07:27.358  
00:07:27.358       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.358       http://cunit.sourceforge.net/
00:07:27.358  
00:07:27.358  
00:07:27.358  Suite: gpt_parse
00:07:27.358    Test: test_parse_mbr_and_primary ...[2024-11-20 04:57:41.235015] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:27.358  [2024-11-20 04:57:41.235317] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:27.358  [2024-11-20 04:57:41.235406] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:07:27.358  [2024-11-20 04:57:41.235499] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:07:27.358  [2024-11-20 04:57:41.235546] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:07:27.358  [2024-11-20 04:57:41.235641] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:07:27.358  passed
00:07:27.358    Test: test_parse_secondary ...[2024-11-20 04:57:41.236493] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:07:27.358  [2024-11-20 04:57:41.236555] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:07:27.358  [2024-11-20 04:57:41.236600] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:07:27.358  [2024-11-20 04:57:41.236643] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:07:27.358  passed
00:07:27.358    Test: test_check_mbr ...[2024-11-20 04:57:41.237516] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:27.358  [2024-11-20 04:57:41.237574] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:07:27.358  passed
00:07:27.358    Test: test_read_header ...[2024-11-20 04:57:41.237646] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600
00:07:27.359  [2024-11-20 04:57:41.237745] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 177:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438
00:07:27.359  [2024-11-20 04:57:41.237829] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match
00:07:27.359  [2024-11-20 04:57:41.237878] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 191:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1)
00:07:27.359  [2024-11-20 04:57:41.237922] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 135:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0)
00:07:27.359  passed
00:07:27.359    Test: test_read_partitions ...[2024-11-20 04:57:41.237965] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error
00:07:27.359  [2024-11-20 04:57:41.238031] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  88:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128
00:07:27.359  [2024-11-20 04:57:41.238089] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  95:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80)
00:07:27.359  [2024-11-20 04:57:41.238152] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough
00:07:27.359  [2024-11-20 04:57:41.238190] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf
00:07:27.359  [2024-11-20 04:57:41.238617] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match
00:07:27.359  passed
00:07:27.359  
00:07:27.359  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.359                suites      1      1    n/a      0        0
00:07:27.359                 tests      5      5      5      0        0
00:07:27.359               asserts     33     33     33      0      n/a
00:07:27.359  
00:07:27.359  Elapsed time =    0.004 seconds
00:07:27.359   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut
00:07:27.359  
00:07:27.359  
00:07:27.359       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.359       http://cunit.sourceforge.net/
00:07:27.359  
00:07:27.359  
00:07:27.359  Suite: bdev_part
00:07:27.359    Test: part_test ...[2024-11-20 04:57:41.279443] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4700:bdev_name_add: *ERROR*: Bdev name 7a5f5b38-2d94-5072-98c5-aa254c9ada85 already exists
00:07:27.359  [2024-11-20 04:57:41.279750] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:7a5f5b38-2d94-5072-98c5-aa254c9ada85 alias for bdev test1
00:07:27.359  passed
00:07:27.359    Test: part_free_test ...passed
00:07:27.619    Test: part_get_io_channel_test ...passed
00:07:27.619    Test: part_construct_ext ...passed
00:07:27.619  
00:07:27.619  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.619                suites      1      1    n/a      0        0
00:07:27.619                 tests      4      4      4      0        0
00:07:27.619               asserts     48     48     48      0      n/a
00:07:27.619  
00:07:27.619  Elapsed time =    0.051 seconds
00:07:27.619   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut
00:07:27.619  
00:07:27.619  
00:07:27.619       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.619       http://cunit.sourceforge.net/
00:07:27.619  
00:07:27.619  
00:07:27.619  Suite: scsi_nvme_suite
00:07:27.619    Test: scsi_nvme_translate_test ...passed
00:07:27.619  
00:07:27.619  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.619                suites      1      1    n/a      0        0
00:07:27.619                 tests      1      1      1      0        0
00:07:27.619               asserts    104    104    104      0      n/a
00:07:27.619  
00:07:27.619  Elapsed time =    0.000 seconds
00:07:27.619   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut
00:07:27.619  
00:07:27.619  
00:07:27.619       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.619       http://cunit.sourceforge.net/
00:07:27.619  
00:07:27.619  
00:07:27.619  Suite: lvol
00:07:27.619    Test: ut_lvs_init ...[2024-11-20 04:57:41.399606] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev
00:07:27.619  [2024-11-20 04:57:41.400027] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device
00:07:27.619  passed
00:07:27.619    Test: ut_lvol_init ...passed
00:07:27.619    Test: ut_lvol_snapshot ...passed
00:07:27.619    Test: ut_lvol_clone ...passed
00:07:27.619    Test: ut_lvs_destroy ...passed
00:07:27.619    Test: ut_lvs_unload ...passed
00:07:27.619    Test: ut_lvol_resize ...[2024-11-20 04:57:41.401821] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist
00:07:27.619  passed
00:07:27.619    Test: ut_lvol_set_read_only ...passed
00:07:27.619    Test: ut_lvol_hotremove ...passed
00:07:27.619    Test: ut_vbdev_lvol_get_io_channel ...passed
00:07:27.619    Test: ut_vbdev_lvol_io_type_supported ...passed
00:07:27.619    Test: ut_lvol_read_write ...passed
00:07:27.619    Test: ut_vbdev_lvol_submit_request ...passed
00:07:27.619    Test: ut_lvol_examine_config ...passed
00:07:27.619    Test: ut_lvol_examine_disk ...[2024-11-20 04:57:41.402633] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID
00:07:27.619  passed
00:07:27.619    Test: ut_lvol_rename ...[2024-11-20 04:57:41.403755] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name'
00:07:27.619  [2024-11-20 04:57:41.403917] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed
00:07:27.619  passed
00:07:27.619    Test: ut_bdev_finish ...passed
00:07:27.619    Test: ut_lvs_rename ...passed
00:07:27.619    Test: ut_lvol_seek ...passed
00:07:27.619    Test: ut_esnap_dev_create ...[2024-11-20 04:57:41.404798] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID
00:07:27.619  [2024-11-20 04:57:41.404906] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36)
00:07:27.619  [2024-11-20 04:57:41.404967] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID
00:07:27.619  passed
00:07:27.619    Test: ut_lvol_esnap_clone_bad_args ...[2024-11-20 04:57:41.405162] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified
00:07:27.619  [2024-11-20 04:57:41.405209] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:07:27.619  passed
00:07:27.619    Test: ut_lvol_shallow_copy ...[2024-11-20 04:57:41.405683] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:07:27.619  [2024-11-20 04:57:41.405740] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL
00:07:27.619  passed
00:07:27.619    Test: ut_lvol_set_external_parent ...[2024-11-20 04:57:41.405901] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:07:27.619  passed
00:07:27.619  
00:07:27.619  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.619                suites      1      1    n/a      0        0
00:07:27.619                 tests     23     23     23      0        0
00:07:27.619               asserts    770    770    770      0      n/a
00:07:27.619  
00:07:27.619  Elapsed time =    0.007 seconds
00:07:27.619   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut
00:07:27.619  
00:07:27.619  
00:07:27.619       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.619       http://cunit.sourceforge.net/
00:07:27.619  
00:07:27.619  
00:07:27.619  Suite: zone_block
00:07:27.619    Test: test_zone_block_create ...passed
00:07:27.619    Test: test_zone_block_create_invalid ...[2024-11-20 04:57:41.454717] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed
00:07:27.619  [2024-11-20 04:57:41.454974] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:07:27.619  [2024-11-20 04:57:41.455118] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev
00:07:27.619  [2024-11-20 04:57:41.455178] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:07:27.619  [2024-11-20 04:57:41.455303] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0
00:07:27.619  [2024-11-20 04:57:41.455340] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:07:27.619  [2024-11-20 04:57:41.455422] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0
00:07:27.619  [2024-11-20 04:57:41.455473] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:07:27.619  passed
00:07:27.619    Test: test_get_zone_info ...[2024-11-20 04:57:41.455874] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.619  [2024-11-20 04:57:41.455941] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.455987] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620    Test: test_supported_io_types ...passed
00:07:27.620    Test: test_reset_zone ...[2024-11-20 04:57:41.456616] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.456678] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620    Test: test_open_zone ...[2024-11-20 04:57:41.456998] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.457609] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.457697] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620    Test: test_zone_write ...[2024-11-20 04:57:41.458067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:07:27.620  [2024-11-20 04:57:41.458123] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.458177] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:07:27.620  [2024-11-20 04:57:41.458223] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.462226] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405)
00:07:27.620  [2024-11-20 04:57:41.462305] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.462374] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 401:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405)
00:07:27.620  [2024-11-20 04:57:41.462402] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.467008] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:07:27.620  [2024-11-20 04:57:41.467091] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620    Test: test_zone_read ...[2024-11-20 04:57:41.467494] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10)
00:07:27.620  [2024-11-20 04:57:41.467548] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.467624] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000)
00:07:27.620  [2024-11-20 04:57:41.467653] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.468070] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10)
00:07:27.620  [2024-11-20 04:57:41.468132] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620    Test: test_close_zone ...[2024-11-20 04:57:41.468457] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.468531] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.468734] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.468791] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620    Test: test_finish_zone ...[2024-11-20 04:57:41.469306] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620    Test: test_append_zone ...[2024-11-20 04:57:41.469365] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.469647] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:07:27.620  [2024-11-20 04:57:41.469706] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.469761] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:07:27.620  [2024-11-20 04:57:41.469785] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  [2024-11-20 04:57:41.477866] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 410:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:07:27.620  [2024-11-20 04:57:41.477929] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:07:27.620  passed
00:07:27.620  
00:07:27.620  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:27.620                suites      1      1    n/a      0        0
00:07:27.620                 tests     11     11     11      0        0
00:07:27.620               asserts   3437   3437   3437      0      n/a
00:07:27.620  
00:07:27.620  Elapsed time =    0.024 seconds
00:07:27.620   04:57:41 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut
00:07:27.620  
00:07:27.620  
00:07:27.620       CUnit - A unit testing framework for C - Version 2.1-3
00:07:27.620       http://cunit.sourceforge.net/
00:07:27.620  
00:07:27.620  
00:07:27.620  Suite: bdev
00:07:27.879    Test: basic ...[2024-11-20 04:57:41.577753] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x564fc7c4df81): Operation not permitted (rc=-1)
00:07:27.879  [2024-11-20 04:57:41.578097] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x6130000003c0 (0x564fc7c4df40): Operation not permitted (rc=-1)
00:07:27.879  [2024-11-20 04:57:41.578167] thread.c:2389:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x564fc7c4df81): Operation not permitted (rc=-1)
00:07:27.879  passed
00:07:27.879    Test: unregister_and_close ...passed
00:07:27.879    Test: unregister_and_close_different_threads ...passed
00:07:27.879    Test: basic_qos ...passed
00:07:27.879    Test: put_channel_during_reset ...passed
00:07:28.139    Test: aborted_reset ...passed
00:07:28.139    Test: aborted_reset_no_outstanding_io ...passed
00:07:28.139    Test: io_during_reset ...passed
00:07:28.139    Test: reset_completions ...passed
00:07:28.139    Test: io_during_qos_queue ...passed
00:07:28.139    Test: io_during_qos_reset ...passed
00:07:28.139    Test: enomem ...passed
00:07:28.139    Test: enomem_multi_bdev ...passed
00:07:28.398    Test: enomem_multi_bdev_unregister ...passed
00:07:28.398    Test: enomem_multi_io_target ...passed
00:07:28.398    Test: qos_dynamic_enable ...passed
00:07:28.398    Test: bdev_histograms_mt ...passed
00:07:28.398    Test: bdev_set_io_timeout_mt ...[2024-11-20 04:57:42.255387] thread.c: 484:spdk_thread_lib_fini: *ERROR*: io_device 0x6130000003c0 not unregistered
00:07:28.398  passed
00:07:28.398    Test: lock_lba_range_then_submit_io ...[2024-11-20 04:57:42.271384] thread.c:2193:spdk_io_device_register: *ERROR*: io_device 0x564fc7c4df00 already registered (old:0x6130000003c0 new:0x613000000c80)
00:07:28.398  passed
00:07:28.398    Test: unregister_during_reset ...passed
00:07:28.398    Test: event_notify_and_close ...passed
00:07:28.657    Test: unregister_and_qos_poller ...passed
00:07:28.657    Test: reset_start_complete_race ...passed
00:07:28.657  Suite: bdev_wrong_thread
00:07:28.657    Test: spdk_bdev_register_wt ...[2024-11-20 04:57:42.414622] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8633:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x619000002880 (0x619000002880)
00:07:28.657  passed
00:07:28.657    Test: spdk_bdev_examine_wt ...[2024-11-20 04:57:42.415098] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 832:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x619000002880 (0x619000002880)
00:07:28.657  passed
00:07:28.657  
00:07:28.657  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:28.657                suites      2      2    n/a      0        0
00:07:28.657                 tests     25     25     25      0        0
00:07:28.657               asserts    637    637    637      0      n/a
00:07:28.657  
00:07:28.657  Elapsed time =    0.864 seconds
00:07:28.657  
00:07:28.657  real	0m3.089s
00:07:28.657  user	0m1.526s
00:07:28.657  sys	0m1.566s
00:07:28.657   04:57:42 unittest.unittest_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.657   04:57:42 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x
00:07:28.657  ************************************
00:07:28.657  END TEST unittest_bdev
00:07:28.657  ************************************
00:07:28.657   04:57:42 unittest -- unit/unittest.sh@197 -- # [[ n == y ]]
00:07:28.657   04:57:42 unittest -- unit/unittest.sh@202 -- # [[ n == y ]]
00:07:28.657   04:57:42 unittest -- unit/unittest.sh@207 -- # [[ n == y ]]
00:07:28.657   04:57:42 unittest -- unit/unittest.sh@211 -- # [[ n == y ]]
00:07:28.657   04:57:42 unittest -- unit/unittest.sh@215 -- # run_test unittest_blob_blobfs unittest_blob
00:07:28.657   04:57:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:28.657   04:57:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.657   04:57:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:28.657  ************************************
00:07:28.657  START TEST unittest_blob_blobfs
00:07:28.657  ************************************
00:07:28.657   04:57:42 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1129 -- # unittest_blob
00:07:28.657   04:57:42 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:07:28.657   04:57:42 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:07:28.657  
00:07:28.657  
00:07:28.657       CUnit - A unit testing framework for C - Version 2.1-3
00:07:28.657       http://cunit.sourceforge.net/
00:07:28.657  
00:07:28.657  
00:07:28.657  Suite: blob_nocopy_noextent
00:07:28.657    Test: blob_init ...[2024-11-20 04:57:42.522325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:28.657  passed
00:07:28.657    Test: blob_thin_provision ...passed
00:07:28.657    Test: blob_read_only ...passed
00:07:28.657    Test: bs_load ...[2024-11-20 04:57:42.599003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:28.657  passed
00:07:28.916    Test: bs_load_custom_cluster_size ...passed
00:07:28.916    Test: bs_load_after_failed_grow ...passed
00:07:28.916    Test: bs_cluster_sz ...[2024-11-20 04:57:42.627158] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:28.916  [2024-11-20 04:57:42.627710] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:28.916  [2024-11-20 04:57:42.627913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:28.916  passed
00:07:28.916    Test: bs_resize_md ...passed
00:07:28.916    Test: bs_destroy ...passed
00:07:28.916    Test: bs_type ...passed
00:07:28.916    Test: bs_super_block ...passed
00:07:28.916    Test: bs_test_recover_cluster_count ...passed
00:07:28.916    Test: bs_grow_live ...passed
00:07:28.916    Test: bs_grow_live_no_space ...passed
00:07:28.916    Test: bs_test_grow ...passed
00:07:28.916    Test: blob_serialize_test ...passed
00:07:28.916    Test: super_block_crc ...passed
00:07:28.916    Test: blob_thin_prov_write_count_io ...passed
00:07:28.916    Test: blob_thin_prov_unmap_cluster ...passed
00:07:28.916    Test: bs_load_iter_test ...passed
00:07:28.916    Test: blob_relations ...[2024-11-20 04:57:42.837990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.916  [2024-11-20 04:57:42.838113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.916  [2024-11-20 04:57:42.839210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.916  [2024-11-20 04:57:42.839308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.916  passed
00:07:28.916    Test: blob_relations2 ...[2024-11-20 04:57:42.853333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.916  [2024-11-20 04:57:42.853435] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.916  [2024-11-20 04:57:42.853491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.916  [2024-11-20 04:57:42.853528] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.916  [2024-11-20 04:57:42.855154] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.916  [2024-11-20 04:57:42.855235] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.916  [2024-11-20 04:57:42.855837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:28.916  [2024-11-20 04:57:42.855948] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:28.916  passed
00:07:28.916    Test: blob_relations3 ...passed
00:07:29.175    Test: blobstore_clean_power_failure ...passed
00:07:29.175    Test: blob_delete_snapshot_power_failure ...[2024-11-20 04:57:43.005675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:29.175  [2024-11-20 04:57:43.017864] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:29.175  [2024-11-20 04:57:43.017962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:29.175  [2024-11-20 04:57:43.018021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.175  [2024-11-20 04:57:43.029737] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:29.175  [2024-11-20 04:57:43.029832] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:29.175  [2024-11-20 04:57:43.029884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:29.175  [2024-11-20 04:57:43.029954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.175  [2024-11-20 04:57:43.041712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:29.175  [2024-11-20 04:57:43.041853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.175  [2024-11-20 04:57:43.053547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:29.175  [2024-11-20 04:57:43.053695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.175  [2024-11-20 04:57:43.065507] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:29.175  [2024-11-20 04:57:43.065659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:29.175  passed
00:07:29.175    Test: blob_create_snapshot_power_failure ...[2024-11-20 04:57:43.099633] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:29.175  [2024-11-20 04:57:43.121639] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:29.435  [2024-11-20 04:57:43.133506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:29.435  passed
00:07:29.435    Test: blob_io_unit ...passed
00:07:29.435    Test: blob_io_unit_compatibility ...passed
00:07:29.435    Test: blob_ext_md_pages ...passed
00:07:29.435    Test: blob_esnap_io_4096_4096 ...passed
00:07:29.435    Test: blob_esnap_io_512_512 ...passed
00:07:29.435    Test: blob_esnap_io_4096_512 ...passed
00:07:29.435    Test: blob_esnap_io_512_4096 ...passed
00:07:29.435    Test: blob_esnap_clone_resize ...passed
00:07:29.435  Suite: blob_bs_nocopy_noextent
00:07:29.435    Test: blob_open ...passed
00:07:29.435    Test: blob_create ...[2024-11-20 04:57:43.384009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:29.694  passed
00:07:29.694    Test: blob_create_loop ...passed
00:07:29.694    Test: blob_create_fail ...[2024-11-20 04:57:43.476742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:29.694  passed
00:07:29.694    Test: blob_create_internal ...passed
00:07:29.694    Test: blob_create_zero_extent ...passed
00:07:29.694    Test: blob_snapshot ...passed
00:07:29.694    Test: blob_clone ...passed
00:07:29.694    Test: blob_inflate ...[2024-11-20 04:57:43.642901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:29.953  passed
00:07:29.953    Test: blob_delete ...passed
00:07:29.953    Test: blob_resize_test ...[2024-11-20 04:57:43.702643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:29.953  passed
00:07:29.953    Test: blob_resize_thin_test ...passed
00:07:29.953    Test: channel_ops ...passed
00:07:29.953    Test: blob_super ...passed
00:07:29.953    Test: blob_rw_verify_iov ...passed
00:07:29.953    Test: blob_unmap ...passed
00:07:29.953    Test: blob_iter ...passed
00:07:30.212    Test: blob_parse_md ...passed
00:07:30.212    Test: bs_load_pending_removal ...passed
00:07:30.212    Test: bs_unload ...[2024-11-20 04:57:43.981321] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:30.212  passed
00:07:30.212    Test: bs_usable_clusters ...passed
00:07:30.212    Test: blob_crc ...[2024-11-20 04:57:44.049049] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:30.212  [2024-11-20 04:57:44.049194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:30.212  passed
00:07:30.212    Test: blob_flags ...passed
00:07:30.212    Test: bs_version ...passed
00:07:30.212    Test: blob_set_xattrs_test ...[2024-11-20 04:57:44.140494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:30.212  [2024-11-20 04:57:44.140626] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:30.212  passed
00:07:30.472    Test: blob_thin_prov_alloc ...passed
00:07:30.472    Test: blob_insert_cluster_msg_test ...passed
00:07:30.472    Test: blob_thin_prov_rw ...passed
00:07:30.472    Test: blob_thin_prov_rle ...passed
00:07:30.472    Test: blob_thin_prov_rw_iov ...passed
00:07:30.731    Test: blob_snapshot_rw ...passed
00:07:30.731    Test: blob_snapshot_rw_iov ...passed
00:07:30.731    Test: blob_inflate_rw ...passed
00:07:30.990    Test: blob_snapshot_freeze_io ...passed
00:07:30.990    Test: blob_operation_split_rw ...passed
00:07:31.248    Test: blob_operation_split_rw_iov ...passed
00:07:31.248    Test: blob_simultaneous_operations ...[2024-11-20 04:57:44.992790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:31.248  [2024-11-20 04:57:44.992927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:31.248  [2024-11-20 04:57:44.994251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:31.248  [2024-11-20 04:57:44.994317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:31.248  [2024-11-20 04:57:45.006762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:31.248  [2024-11-20 04:57:45.006823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:31.248  [2024-11-20 04:57:45.006990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:31.248  [2024-11-20 04:57:45.007043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:31.248  passed
00:07:31.248    Test: blob_persist_test ...passed
00:07:31.248    Test: blob_decouple_snapshot ...passed
00:07:31.248    Test: blob_seek_io_unit ...passed
00:07:31.248    Test: blob_nested_freezes ...passed
00:07:31.519    Test: blob_clone_resize ...passed
00:07:31.519    Test: blob_shallow_copy ...[2024-11-20 04:57:45.265694] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:31.519  [2024-11-20 04:57:45.266102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:31.519  [2024-11-20 04:57:45.266390] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:31.519  passed
00:07:31.519  Suite: blob_blob_nocopy_noextent
00:07:31.519    Test: blob_write ...passed
00:07:31.519    Test: blob_read ...passed
00:07:31.519    Test: blob_rw_verify ...passed
00:07:31.519    Test: blob_rw_verify_iov_nomem ...passed
00:07:31.519    Test: blob_rw_iov_read_only ...passed
00:07:31.519    Test: blob_xattr ...passed
00:07:31.795    Test: blob_dirty_shutdown ...passed
00:07:31.795    Test: blob_is_degraded ...passed
00:07:31.795  Suite: blob_esnap_bs_nocopy_noextent
00:07:31.795    Test: blob_esnap_create ...passed
00:07:31.795    Test: blob_esnap_thread_add_remove ...passed
00:07:31.795    Test: blob_esnap_clone_snapshot ...passed
00:07:31.795    Test: blob_esnap_clone_inflate ...passed
00:07:31.795    Test: blob_esnap_clone_decouple ...passed
00:07:31.795    Test: blob_esnap_clone_reload ...passed
00:07:31.795    Test: blob_esnap_hotplug ...passed
00:07:32.058    Test: blob_set_parent ...[2024-11-20 04:57:45.762138] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:32.058  [2024-11-20 04:57:45.762254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:32.058  [2024-11-20 04:57:45.762417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:32.058  [2024-11-20 04:57:45.762471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:32.058  [2024-11-20 04:57:45.763092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:32.058  passed
00:07:32.058    Test: blob_set_external_parent ...[2024-11-20 04:57:45.794295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:32.058  [2024-11-20 04:57:45.794413] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:32.058  [2024-11-20 04:57:45.794493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:32.058  [2024-11-20 04:57:45.795115] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:32.058  passed
00:07:32.058  Suite: blob_nocopy_extent
00:07:32.058    Test: blob_init ...[2024-11-20 04:57:45.806780] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:32.058  passed
00:07:32.058    Test: blob_thin_provision ...passed
00:07:32.058    Test: blob_read_only ...passed
00:07:32.058    Test: bs_load ...[2024-11-20 04:57:45.851861] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:32.058  passed
00:07:32.058    Test: bs_load_custom_cluster_size ...passed
00:07:32.058    Test: bs_load_after_failed_grow ...passed
00:07:32.058    Test: bs_cluster_sz ...[2024-11-20 04:57:45.879302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:32.058  [2024-11-20 04:57:45.879732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:32.058  [2024-11-20 04:57:45.879840] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:32.058  passed
00:07:32.058    Test: bs_resize_md ...passed
00:07:32.058    Test: bs_destroy ...passed
00:07:32.058    Test: bs_type ...passed
00:07:32.058    Test: bs_super_block ...passed
00:07:32.058    Test: bs_test_recover_cluster_count ...passed
00:07:32.058    Test: bs_grow_live ...passed
00:07:32.058    Test: bs_grow_live_no_space ...passed
00:07:32.058    Test: bs_test_grow ...passed
00:07:32.058    Test: blob_serialize_test ...passed
00:07:32.058    Test: super_block_crc ...passed
00:07:32.058    Test: blob_thin_prov_write_count_io ...passed
00:07:32.317    Test: blob_thin_prov_unmap_cluster ...passed
00:07:32.317    Test: bs_load_iter_test ...passed
00:07:32.317    Test: blob_relations ...[2024-11-20 04:57:46.067742] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:32.317  [2024-11-20 04:57:46.067887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.317  [2024-11-20 04:57:46.069028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:32.317  [2024-11-20 04:57:46.069121] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.317  passed
00:07:32.317    Test: blob_relations2 ...[2024-11-20 04:57:46.084052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:32.317  [2024-11-20 04:57:46.084160] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.317  [2024-11-20 04:57:46.084216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:32.317  [2024-11-20 04:57:46.084260] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.317  [2024-11-20 04:57:46.085883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:32.317  [2024-11-20 04:57:46.085987] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.317  [2024-11-20 04:57:46.086533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:32.317  [2024-11-20 04:57:46.086627] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.317  passed
00:07:32.317    Test: blob_relations3 ...passed
00:07:32.317    Test: blobstore_clean_power_failure ...passed
00:07:32.317    Test: blob_delete_snapshot_power_failure ...[2024-11-20 04:57:46.232312] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:32.317  [2024-11-20 04:57:46.244057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:32.317  [2024-11-20 04:57:46.255912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:32.317  [2024-11-20 04:57:46.256030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:32.317  [2024-11-20 04:57:46.256070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.317  [2024-11-20 04:57:46.267789] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:32.317  [2024-11-20 04:57:46.267869] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:32.317  [2024-11-20 04:57:46.267913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:32.317  [2024-11-20 04:57:46.267967] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.576  [2024-11-20 04:57:46.280015] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:32.576  [2024-11-20 04:57:46.280110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:32.576  [2024-11-20 04:57:46.280141] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:32.576  [2024-11-20 04:57:46.280190] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.576  [2024-11-20 04:57:46.292102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:32.576  [2024-11-20 04:57:46.292222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.576  [2024-11-20 04:57:46.304139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:32.576  [2024-11-20 04:57:46.304269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.576  [2024-11-20 04:57:46.316231] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:32.576  [2024-11-20 04:57:46.316342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:32.576  passed
00:07:32.576    Test: blob_create_snapshot_power_failure ...[2024-11-20 04:57:46.350847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:32.576  [2024-11-20 04:57:46.362089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:32.576  [2024-11-20 04:57:46.384833] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:32.576  [2024-11-20 04:57:46.397461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:32.576  passed
00:07:32.576    Test: blob_io_unit ...passed
00:07:32.576    Test: blob_io_unit_compatibility ...passed
00:07:32.576    Test: blob_ext_md_pages ...passed
00:07:32.576    Test: blob_esnap_io_4096_4096 ...passed
00:07:32.576    Test: blob_esnap_io_512_512 ...passed
00:07:32.835    Test: blob_esnap_io_4096_512 ...passed
00:07:32.835    Test: blob_esnap_io_512_4096 ...passed
00:07:32.835    Test: blob_esnap_clone_resize ...passed
00:07:32.835  Suite: blob_bs_nocopy_extent
00:07:32.835    Test: blob_open ...passed
00:07:32.835    Test: blob_create ...[2024-11-20 04:57:46.644728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:32.835  passed
00:07:32.835    Test: blob_create_loop ...passed
00:07:32.835    Test: blob_create_fail ...[2024-11-20 04:57:46.742702] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:32.835  passed
00:07:32.835    Test: blob_create_internal ...passed
00:07:33.094    Test: blob_create_zero_extent ...passed
00:07:33.094    Test: blob_snapshot ...passed
00:07:33.094    Test: blob_clone ...passed
00:07:33.094    Test: blob_inflate ...[2024-11-20 04:57:46.913957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:33.094  passed
00:07:33.094    Test: blob_delete ...passed
00:07:33.094    Test: blob_resize_test ...[2024-11-20 04:57:46.974624] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:33.094  passed
00:07:33.094    Test: blob_resize_thin_test ...passed
00:07:33.094    Test: channel_ops ...passed
00:07:33.353    Test: blob_super ...passed
00:07:33.353    Test: blob_rw_verify_iov ...passed
00:07:33.353    Test: blob_unmap ...passed
00:07:33.353    Test: blob_iter ...passed
00:07:33.353    Test: blob_parse_md ...passed
00:07:33.353    Test: bs_load_pending_removal ...passed
00:07:33.353    Test: bs_unload ...[2024-11-20 04:57:47.254499] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:33.353  passed
00:07:33.353    Test: bs_usable_clusters ...passed
00:07:33.612    Test: blob_crc ...[2024-11-20 04:57:47.315245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:33.612  [2024-11-20 04:57:47.315403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:33.612  passed
00:07:33.612    Test: blob_flags ...passed
00:07:33.612    Test: bs_version ...passed
00:07:33.612    Test: blob_set_xattrs_test ...[2024-11-20 04:57:47.406516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:33.612  [2024-11-20 04:57:47.406660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:33.612  passed
00:07:33.612    Test: blob_thin_prov_alloc ...passed
00:07:33.612    Test: blob_insert_cluster_msg_test ...passed
00:07:33.871    Test: blob_thin_prov_rw ...passed
00:07:33.871    Test: blob_thin_prov_rle ...passed
00:07:33.871    Test: blob_thin_prov_rw_iov ...passed
00:07:33.871    Test: blob_snapshot_rw ...passed
00:07:33.871    Test: blob_snapshot_rw_iov ...passed
00:07:34.130    Test: blob_inflate_rw ...passed
00:07:34.130    Test: blob_snapshot_freeze_io ...passed
00:07:34.130    Test: blob_operation_split_rw ...passed
00:07:34.389    Test: blob_operation_split_rw_iov ...passed
00:07:34.389    Test: blob_simultaneous_operations ...[2024-11-20 04:57:48.242910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:34.389  [2024-11-20 04:57:48.243035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:34.389  [2024-11-20 04:57:48.244221] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:34.389  [2024-11-20 04:57:48.244269] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:34.389  [2024-11-20 04:57:48.254091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:34.389  [2024-11-20 04:57:48.254149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:34.389  [2024-11-20 04:57:48.254266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:34.389  [2024-11-20 04:57:48.254289] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:34.389  passed
00:07:34.389    Test: blob_persist_test ...passed
00:07:34.648    Test: blob_decouple_snapshot ...passed
00:07:34.648    Test: blob_seek_io_unit ...passed
00:07:34.648    Test: blob_nested_freezes ...passed
00:07:34.648    Test: blob_clone_resize ...passed
00:07:34.648    Test: blob_shallow_copy ...[2024-11-20 04:57:48.495875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:34.648  [2024-11-20 04:57:48.496194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:34.648  [2024-11-20 04:57:48.496444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:34.648  passed
00:07:34.648  Suite: blob_blob_nocopy_extent
00:07:34.648    Test: blob_write ...passed
00:07:34.648    Test: blob_read ...passed
00:07:34.648    Test: blob_rw_verify ...passed
00:07:34.907    Test: blob_rw_verify_iov_nomem ...passed
00:07:34.907    Test: blob_rw_iov_read_only ...passed
00:07:34.907    Test: blob_xattr ...passed
00:07:34.907    Test: blob_dirty_shutdown ...passed
00:07:34.907    Test: blob_is_degraded ...passed
00:07:34.907  Suite: blob_esnap_bs_nocopy_extent
00:07:34.907    Test: blob_esnap_create ...passed
00:07:34.907    Test: blob_esnap_thread_add_remove ...passed
00:07:34.907    Test: blob_esnap_clone_snapshot ...passed
00:07:35.165    Test: blob_esnap_clone_inflate ...passed
00:07:35.165    Test: blob_esnap_clone_decouple ...passed
00:07:35.165    Test: blob_esnap_clone_reload ...passed
00:07:35.166    Test: blob_esnap_hotplug ...passed
00:07:35.166    Test: blob_set_parent ...[2024-11-20 04:57:48.986322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:35.166  [2024-11-20 04:57:48.986417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:35.166  [2024-11-20 04:57:48.986544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:35.166  [2024-11-20 04:57:48.986578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:35.166  [2024-11-20 04:57:48.987030] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:35.166  passed
00:07:35.166    Test: blob_set_external_parent ...[2024-11-20 04:57:49.018242] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:35.166  [2024-11-20 04:57:49.018361] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:35.166  [2024-11-20 04:57:49.018401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:35.166  [2024-11-20 04:57:49.018785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:35.166  passed
00:07:35.166  Suite: blob_copy_noextent
00:07:35.166    Test: blob_init ...[2024-11-20 04:57:49.029681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:35.166  passed
00:07:35.166    Test: blob_thin_provision ...passed
00:07:35.166    Test: blob_read_only ...passed
00:07:35.166    Test: bs_load ...[2024-11-20 04:57:49.072874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:35.166  passed
00:07:35.166    Test: bs_load_custom_cluster_size ...passed
00:07:35.166    Test: bs_load_after_failed_grow ...passed
00:07:35.166    Test: bs_cluster_sz ...[2024-11-20 04:57:49.096178] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:35.166  [2024-11-20 04:57:49.096386] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:35.166  [2024-11-20 04:57:49.096440] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:35.166  passed
00:07:35.166    Test: bs_resize_md ...passed
00:07:35.423    Test: bs_destroy ...passed
00:07:35.423    Test: bs_type ...passed
00:07:35.423    Test: bs_super_block ...passed
00:07:35.423    Test: bs_test_recover_cluster_count ...passed
00:07:35.423    Test: bs_grow_live ...passed
00:07:35.423    Test: bs_grow_live_no_space ...passed
00:07:35.423    Test: bs_test_grow ...passed
00:07:35.423    Test: blob_serialize_test ...passed
00:07:35.423    Test: super_block_crc ...passed
00:07:35.423    Test: blob_thin_prov_write_count_io ...passed
00:07:35.423    Test: blob_thin_prov_unmap_cluster ...passed
00:07:35.423    Test: bs_load_iter_test ...passed
00:07:35.423    Test: blob_relations ...[2024-11-20 04:57:49.273904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:35.423  [2024-11-20 04:57:49.274008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.423  [2024-11-20 04:57:49.274587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:35.423  [2024-11-20 04:57:49.274634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.423  passed
00:07:35.423    Test: blob_relations2 ...[2024-11-20 04:57:49.287050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:35.423  [2024-11-20 04:57:49.287122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.423  [2024-11-20 04:57:49.287170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:35.423  [2024-11-20 04:57:49.287187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.423  [2024-11-20 04:57:49.288171] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:35.423  [2024-11-20 04:57:49.288216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.423  [2024-11-20 04:57:49.288535] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:35.423  [2024-11-20 04:57:49.288586] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.423  passed
00:07:35.423    Test: blob_relations3 ...passed
00:07:35.682    Test: blobstore_clean_power_failure ...passed
00:07:35.682    Test: blob_delete_snapshot_power_failure ...[2024-11-20 04:57:49.428646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:35.682  [2024-11-20 04:57:49.439852] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:35.682  [2024-11-20 04:57:49.439930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:35.682  [2024-11-20 04:57:49.439971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.682  [2024-11-20 04:57:49.451341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:35.682  [2024-11-20 04:57:49.451447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:35.682  [2024-11-20 04:57:49.451478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:35.682  [2024-11-20 04:57:49.451516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.682  [2024-11-20 04:57:49.462876] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:35.682  [2024-11-20 04:57:49.462986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.682  [2024-11-20 04:57:49.474426] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:35.682  [2024-11-20 04:57:49.474545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.682  [2024-11-20 04:57:49.485835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:35.682  [2024-11-20 04:57:49.485945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:35.682  passed
00:07:35.682    Test: blob_create_snapshot_power_failure ...[2024-11-20 04:57:49.518485] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:35.682  [2024-11-20 04:57:49.539658] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:07:35.682  [2024-11-20 04:57:49.550599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:35.682  passed
00:07:35.682    Test: blob_io_unit ...passed
00:07:35.682    Test: blob_io_unit_compatibility ...passed
00:07:35.682    Test: blob_ext_md_pages ...passed
00:07:35.941    Test: blob_esnap_io_4096_4096 ...passed
00:07:35.941    Test: blob_esnap_io_512_512 ...passed
00:07:35.941    Test: blob_esnap_io_4096_512 ...passed
00:07:35.941    Test: blob_esnap_io_512_4096 ...passed
00:07:35.941    Test: blob_esnap_clone_resize ...passed
00:07:35.941  Suite: blob_bs_copy_noextent
00:07:35.941    Test: blob_open ...passed
00:07:35.941    Test: blob_create ...[2024-11-20 04:57:49.791972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:35.941  passed
00:07:35.941    Test: blob_create_loop ...passed
00:07:35.941    Test: blob_create_fail ...[2024-11-20 04:57:49.876553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:35.941  passed
00:07:36.199    Test: blob_create_internal ...passed
00:07:36.199    Test: blob_create_zero_extent ...passed
00:07:36.199    Test: blob_snapshot ...passed
00:07:36.199    Test: blob_clone ...passed
00:07:36.199    Test: blob_inflate ...[2024-11-20 04:57:50.037431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:36.199  passed
00:07:36.199    Test: blob_delete ...passed
00:07:36.199    Test: blob_resize_test ...[2024-11-20 04:57:50.095014] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:36.199  passed
00:07:36.199    Test: blob_resize_thin_test ...passed
00:07:36.458    Test: channel_ops ...passed
00:07:36.458    Test: blob_super ...passed
00:07:36.458    Test: blob_rw_verify_iov ...passed
00:07:36.458    Test: blob_unmap ...passed
00:07:36.458    Test: blob_iter ...passed
00:07:36.458    Test: blob_parse_md ...passed
00:07:36.458    Test: bs_load_pending_removal ...passed
00:07:36.458    Test: bs_unload ...[2024-11-20 04:57:50.374769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:36.458  passed
00:07:36.717    Test: bs_usable_clusters ...passed
00:07:36.717    Test: blob_crc ...[2024-11-20 04:57:50.434533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:36.717  [2024-11-20 04:57:50.434696] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:36.717  passed
00:07:36.717    Test: blob_flags ...passed
00:07:36.717    Test: bs_version ...passed
00:07:36.717    Test: blob_set_xattrs_test ...[2024-11-20 04:57:50.525913] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:36.717  [2024-11-20 04:57:50.526041] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:36.717  passed
00:07:36.717    Test: blob_thin_prov_alloc ...passed
00:07:36.976    Test: blob_insert_cluster_msg_test ...passed
00:07:36.976    Test: blob_thin_prov_rw ...passed
00:07:36.976    Test: blob_thin_prov_rle ...passed
00:07:36.976    Test: blob_thin_prov_rw_iov ...passed
00:07:36.976    Test: blob_snapshot_rw ...passed
00:07:36.976    Test: blob_snapshot_rw_iov ...passed
00:07:37.236    Test: blob_inflate_rw ...passed
00:07:37.236    Test: blob_snapshot_freeze_io ...passed
00:07:37.494    Test: blob_operation_split_rw ...passed
00:07:37.494    Test: blob_operation_split_rw_iov ...passed
00:07:37.494    Test: blob_simultaneous_operations ...[2024-11-20 04:57:51.359182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:37.494  [2024-11-20 04:57:51.359286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.494  [2024-11-20 04:57:51.359827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:37.494  [2024-11-20 04:57:51.359883] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.494  [2024-11-20 04:57:51.362291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:37.494  [2024-11-20 04:57:51.362354] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.495  [2024-11-20 04:57:51.362458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:37.495  [2024-11-20 04:57:51.362480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:37.495  passed
00:07:37.495    Test: blob_persist_test ...passed
00:07:37.495    Test: blob_decouple_snapshot ...passed
00:07:37.754    Test: blob_seek_io_unit ...passed
00:07:37.754    Test: blob_nested_freezes ...passed
00:07:37.754    Test: blob_clone_resize ...passed
00:07:37.754    Test: blob_shallow_copy ...[2024-11-20 04:57:51.572949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:37.754  [2024-11-20 04:57:51.573273] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:37.754  [2024-11-20 04:57:51.573526] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:37.754  passed
00:07:37.754  Suite: blob_blob_copy_noextent
00:07:37.754    Test: blob_write ...passed
00:07:37.754    Test: blob_read ...passed
00:07:37.754    Test: blob_rw_verify ...passed
00:07:37.754    Test: blob_rw_verify_iov_nomem ...passed
00:07:38.013    Test: blob_rw_iov_read_only ...passed
00:07:38.013    Test: blob_xattr ...passed
00:07:38.013    Test: blob_dirty_shutdown ...passed
00:07:38.013    Test: blob_is_degraded ...passed
00:07:38.013  Suite: blob_esnap_bs_copy_noextent
00:07:38.013    Test: blob_esnap_create ...passed
00:07:38.013    Test: blob_esnap_thread_add_remove ...passed
00:07:38.013    Test: blob_esnap_clone_snapshot ...passed
00:07:38.013    Test: blob_esnap_clone_inflate ...passed
00:07:38.272    Test: blob_esnap_clone_decouple ...passed
00:07:38.272    Test: blob_esnap_clone_reload ...passed
00:07:38.272    Test: blob_esnap_hotplug ...passed
00:07:38.272    Test: blob_set_parent ...[2024-11-20 04:57:52.061396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:38.272  [2024-11-20 04:57:52.061490] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:38.272  [2024-11-20 04:57:52.061600] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:38.272  [2024-11-20 04:57:52.061646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:38.272  [2024-11-20 04:57:52.062058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:38.272  passed
00:07:38.272    Test: blob_set_external_parent ...[2024-11-20 04:57:52.092396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:38.272  [2024-11-20 04:57:52.092521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:38.272  [2024-11-20 04:57:52.092549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:38.272  [2024-11-20 04:57:52.092888] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:38.272  passed
00:07:38.272  Suite: blob_copy_extent
00:07:38.272    Test: blob_init ...[2024-11-20 04:57:52.103274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5500:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:07:38.272  passed
00:07:38.272    Test: blob_thin_provision ...passed
00:07:38.272    Test: blob_read_only ...passed
00:07:38.272    Test: bs_load ...[2024-11-20 04:57:52.144406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 974:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:07:38.272  passed
00:07:38.272    Test: bs_load_custom_cluster_size ...passed
00:07:38.272    Test: bs_load_after_failed_grow ...passed
00:07:38.272    Test: bs_cluster_sz ...[2024-11-20 04:57:52.166955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:07:38.272  [2024-11-20 04:57:52.167152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5631:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:07:38.272  [2024-11-20 04:57:52.167193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3893:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096
00:07:38.272  passed
00:07:38.272    Test: bs_resize_md ...passed
00:07:38.272    Test: bs_destroy ...passed
00:07:38.272    Test: bs_type ...passed
00:07:38.532    Test: bs_super_block ...passed
00:07:38.532    Test: bs_test_recover_cluster_count ...passed
00:07:38.532    Test: bs_grow_live ...passed
00:07:38.532    Test: bs_grow_live_no_space ...passed
00:07:38.532    Test: bs_test_grow ...passed
00:07:38.532    Test: blob_serialize_test ...passed
00:07:38.532    Test: super_block_crc ...passed
00:07:38.532    Test: blob_thin_prov_write_count_io ...passed
00:07:38.532    Test: blob_thin_prov_unmap_cluster ...passed
00:07:38.532    Test: bs_load_iter_test ...passed
00:07:38.532    Test: blob_relations ...[2024-11-20 04:57:52.330785] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:38.532  [2024-11-20 04:57:52.330882] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.532  [2024-11-20 04:57:52.331579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:38.532  [2024-11-20 04:57:52.331656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.532  passed
00:07:38.532    Test: blob_relations2 ...[2024-11-20 04:57:52.346003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:38.532  [2024-11-20 04:57:52.346096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.532  [2024-11-20 04:57:52.346124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:38.532  [2024-11-20 04:57:52.346140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.532  [2024-11-20 04:57:52.347173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:38.532  [2024-11-20 04:57:52.347229] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.532  [2024-11-20 04:57:52.347604] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:07:38.532  [2024-11-20 04:57:52.347650] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.532  passed
00:07:38.532    Test: blob_relations3 ...passed
00:07:38.532    Test: blobstore_clean_power_failure ...passed
00:07:38.790    Test: blob_delete_snapshot_power_failure ...[2024-11-20 04:57:52.488924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:38.790  [2024-11-20 04:57:52.500000] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:38.790  [2024-11-20 04:57:52.514333] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:38.790  [2024-11-20 04:57:52.514412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:38.790  [2024-11-20 04:57:52.514466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.790  [2024-11-20 04:57:52.526177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:38.790  [2024-11-20 04:57:52.526251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:38.790  [2024-11-20 04:57:52.526298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:38.791  [2024-11-20 04:57:52.526352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.791  [2024-11-20 04:57:52.538004] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:38.791  [2024-11-20 04:57:52.538078] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:07:38.791  [2024-11-20 04:57:52.538125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8311:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:07:38.791  [2024-11-20 04:57:52.538152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.791  [2024-11-20 04:57:52.549580] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8238:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:07:38.791  [2024-11-20 04:57:52.549698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.791  [2024-11-20 04:57:52.560979] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8107:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:07:38.791  [2024-11-20 04:57:52.561087] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.791  [2024-11-20 04:57:52.572672] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8051:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:07:38.791  [2024-11-20 04:57:52.572767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:38.791  passed
00:07:38.791    Test: blob_create_snapshot_power_failure ...[2024-11-20 04:57:52.606209] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:07:38.791  [2024-11-20 04:57:52.617322] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:07:38.791  [2024-11-20 04:57:52.638705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1678:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:07:38.791  [2024-11-20 04:57:52.649958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6456:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:07:38.791  passed
00:07:38.791    Test: blob_io_unit ...passed
00:07:38.791    Test: blob_io_unit_compatibility ...passed
00:07:38.791    Test: blob_ext_md_pages ...passed
00:07:38.791    Test: blob_esnap_io_4096_4096 ...passed
00:07:39.049    Test: blob_esnap_io_512_512 ...passed
00:07:39.049    Test: blob_esnap_io_4096_512 ...passed
00:07:39.049    Test: blob_esnap_io_512_4096 ...passed
00:07:39.049    Test: blob_esnap_clone_resize ...passed
00:07:39.049  Suite: blob_bs_copy_extent
00:07:39.049    Test: blob_open ...passed
00:07:39.049    Test: blob_create ...[2024-11-20 04:57:52.892956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:07:39.049  passed
00:07:39.049    Test: blob_create_loop ...passed
00:07:39.049    Test: blob_create_fail ...[2024-11-20 04:57:52.981522] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:39.049  passed
00:07:39.308    Test: blob_create_internal ...passed
00:07:39.308    Test: blob_create_zero_extent ...passed
00:07:39.308    Test: blob_snapshot ...passed
00:07:39.308    Test: blob_clone ...passed
00:07:39.308    Test: blob_inflate ...[2024-11-20 04:57:53.137706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7119:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:07:39.308  passed
00:07:39.308    Test: blob_delete ...passed
00:07:39.308    Test: blob_resize_test ...[2024-11-20 04:57:53.194180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7856:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:07:39.308  passed
00:07:39.308    Test: blob_resize_thin_test ...passed
00:07:39.566    Test: channel_ops ...passed
00:07:39.566    Test: blob_super ...passed
00:07:39.566    Test: blob_rw_verify_iov ...passed
00:07:39.566    Test: blob_unmap ...passed
00:07:39.566    Test: blob_iter ...passed
00:07:39.566    Test: blob_parse_md ...passed
00:07:39.566    Test: bs_load_pending_removal ...passed
00:07:39.566    Test: bs_unload ...[2024-11-20 04:57:53.467896] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5888:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:07:39.566  passed
00:07:39.566    Test: bs_usable_clusters ...passed
00:07:39.825    Test: blob_crc ...[2024-11-20 04:57:53.526770] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:39.825  [2024-11-20 04:57:53.526927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1687:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:07:39.825  passed
00:07:39.825    Test: blob_flags ...passed
00:07:39.825    Test: bs_version ...passed
00:07:39.825    Test: blob_set_xattrs_test ...[2024-11-20 04:57:53.617002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:39.825  [2024-11-20 04:57:53.617132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6337:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:07:39.825  passed
00:07:39.825    Test: blob_thin_prov_alloc ...passed
00:07:39.825    Test: blob_insert_cluster_msg_test ...passed
00:07:40.084    Test: blob_thin_prov_rw ...passed
00:07:40.084    Test: blob_thin_prov_rle ...passed
00:07:40.084    Test: blob_thin_prov_rw_iov ...passed
00:07:40.084    Test: blob_snapshot_rw ...passed
00:07:40.084    Test: blob_snapshot_rw_iov ...passed
00:07:40.343    Test: blob_inflate_rw ...passed
00:07:40.343    Test: blob_snapshot_freeze_io ...passed
00:07:40.343    Test: blob_operation_split_rw ...passed
00:07:40.601    Test: blob_operation_split_rw_iov ...passed
00:07:40.601    Test: blob_simultaneous_operations ...[2024-11-20 04:57:54.424567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.601  [2024-11-20 04:57:54.424659] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.601  [2024-11-20 04:57:54.425222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.601  [2024-11-20 04:57:54.425300] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.601  [2024-11-20 04:57:54.427821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.601  [2024-11-20 04:57:54.427878] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.601  [2024-11-20 04:57:54.427974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8424:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:07:40.601  [2024-11-20 04:57:54.428011] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8364:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:07:40.601  passed
00:07:40.601    Test: blob_persist_test ...passed
00:07:40.601    Test: blob_decouple_snapshot ...passed
00:07:40.601    Test: blob_seek_io_unit ...passed
00:07:40.866    Test: blob_nested_freezes ...passed
00:07:40.866    Test: blob_clone_resize ...passed
00:07:40.866    Test: blob_shallow_copy ...[2024-11-20 04:57:54.644051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7342:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:07:40.866  [2024-11-20 04:57:54.644372] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7352:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:07:40.866  [2024-11-20 04:57:54.644652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7360:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:07:40.866  passed
00:07:40.866  Suite: blob_blob_copy_extent
00:07:40.866    Test: blob_write ...passed
00:07:40.866    Test: blob_read ...passed
00:07:40.866    Test: blob_rw_verify ...passed
00:07:40.866    Test: blob_rw_verify_iov_nomem ...passed
00:07:40.866    Test: blob_rw_iov_read_only ...passed
00:07:41.126    Test: blob_xattr ...passed
00:07:41.126    Test: blob_dirty_shutdown ...passed
00:07:41.126    Test: blob_is_degraded ...passed
00:07:41.126  Suite: blob_esnap_bs_copy_extent
00:07:41.126    Test: blob_esnap_create ...passed
00:07:41.126    Test: blob_esnap_thread_add_remove ...passed
00:07:41.126    Test: blob_esnap_clone_snapshot ...passed
00:07:41.126    Test: blob_esnap_clone_inflate ...passed
00:07:41.126    Test: blob_esnap_clone_decouple ...passed
00:07:41.384    Test: blob_esnap_clone_reload ...passed
00:07:41.384    Test: blob_esnap_hotplug ...passed
00:07:41.384    Test: blob_set_parent ...[2024-11-20 04:57:55.147636] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7623:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:07:41.384  [2024-11-20 04:57:55.147757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7629:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:07:41.384  [2024-11-20 04:57:55.147881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7558:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:07:41.384  [2024-11-20 04:57:55.147927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7565:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:07:41.384  [2024-11-20 04:57:55.148493] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7604:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:41.384  passed
00:07:41.384    Test: blob_set_external_parent ...[2024-11-20 04:57:55.180992] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:07:41.384  [2024-11-20 04:57:55.181113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7806:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:07:41.384  [2024-11-20 04:57:55.181165] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7759:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:07:41.384  [2024-11-20 04:57:55.181677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7765:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:07:41.384  passed
00:07:41.384  
00:07:41.384  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.384                suites     16     16    n/a      0        0
00:07:41.384                 tests    376    376    376      0        0
00:07:41.384               asserts 144129 144129 144129      0      n/a
00:07:41.384  
00:07:41.384  Elapsed time =   12.669 seconds
00:07:41.384   04:57:55 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:07:41.384  
00:07:41.384  
00:07:41.384       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.384       http://cunit.sourceforge.net/
00:07:41.384  
00:07:41.384  
00:07:41.384  Suite: blob_bdev
00:07:41.384    Test: create_bs_dev ...passed
00:07:41.384    Test: create_bs_dev_ro ...[2024-11-20 04:57:55.287704] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 539:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:07:41.384  passed
00:07:41.384    Test: create_bs_dev_rw ...passed
00:07:41.384    Test: claim_bs_dev ...[2024-11-20 04:57:55.288247] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 350:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:07:41.384  passed
00:07:41.384    Test: claim_bs_dev_ro ...passed
00:07:41.384    Test: deferred_destroy_refs ...passed
00:07:41.384    Test: deferred_destroy_channels ...passed
00:07:41.384    Test: deferred_destroy_threads ...passed
00:07:41.384  
00:07:41.384  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.384                suites      1      1    n/a      0        0
00:07:41.384                 tests      8      8      8      0        0
00:07:41.384               asserts    119    119    119      0      n/a
00:07:41.384  
00:07:41.384  Elapsed time =    0.001 seconds
00:07:41.384   04:57:55 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:07:41.384  
00:07:41.384  
00:07:41.384       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.384       http://cunit.sourceforge.net/
00:07:41.384  
00:07:41.384  
00:07:41.384  Suite: tree
00:07:41.384    Test: blobfs_tree_op_test ...passed
00:07:41.384  
00:07:41.384  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.384                suites      1      1    n/a      0        0
00:07:41.384                 tests      1      1      1      0        0
00:07:41.384               asserts     27     27     27      0      n/a
00:07:41.384  
00:07:41.384  Elapsed time =    0.000 seconds
00:07:41.384   04:57:55 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:07:41.642  
00:07:41.642  
00:07:41.642       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.642       http://cunit.sourceforge.net/
00:07:41.642  
00:07:41.642  
00:07:41.642  Suite: blobfs_async_ut
00:07:41.642    Test: fs_init ...passed
00:07:41.642    Test: fs_open ...passed
00:07:41.642    Test: fs_create ...passed
00:07:41.642    Test: fs_truncate ...passed
00:07:41.642    Test: fs_rename ...[2024-11-20 04:57:55.480672] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted
00:07:41.642  passed
00:07:41.642    Test: fs_rw_async ...passed
00:07:41.642    Test: fs_writev_readv_async ...passed
00:07:41.642    Test: tree_find_buffer_ut ...passed
00:07:41.642    Test: channel_ops ...passed
00:07:41.642    Test: channel_ops_sync ...passed
00:07:41.642  
00:07:41.642  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.642                suites      1      1    n/a      0        0
00:07:41.642                 tests     10     10     10      0        0
00:07:41.642               asserts    292    292    292      0      n/a
00:07:41.642  
00:07:41.642  Elapsed time =    0.178 seconds
00:07:41.642   04:57:55 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
00:07:41.642  
00:07:41.642  
00:07:41.642       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.642       http://cunit.sourceforge.net/
00:07:41.642  
00:07:41.642  
00:07:41.642  Suite: blobfs_sync_ut
00:07:41.901    Test: cache_read_after_write ...[2024-11-20 04:57:55.637348] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted
00:07:41.901  passed
00:07:41.901    Test: file_length ...passed
00:07:41.901    Test: append_write_to_extend_blob ...passed
00:07:41.901    Test: partial_buffer ...passed
00:07:41.901    Test: cache_write_null_buffer ...passed
00:07:41.901    Test: fs_create_sync ...passed
00:07:41.901    Test: fs_rename_sync ...passed
00:07:41.901    Test: cache_append_no_cache ...passed
00:07:41.901    Test: fs_delete_file_without_close ...passed
00:07:41.901  
00:07:41.901  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.901                suites      1      1    n/a      0        0
00:07:41.901                 tests      9      9      9      0        0
00:07:41.901               asserts    345    345    345      0      n/a
00:07:41.901  
00:07:41.901  Elapsed time =    0.311 seconds
00:07:41.901   04:57:55 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut
00:07:41.901  
00:07:41.901  
00:07:41.901       CUnit - A unit testing framework for C - Version 2.1-3
00:07:41.901       http://cunit.sourceforge.net/
00:07:41.901  
00:07:41.901  
00:07:41.901  Suite: blobfs_bdev_ut
00:07:41.901    Test: spdk_blobfs_bdev_detect_test ...[2024-11-20 04:57:55.799026] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:41.901  passed
00:07:41.901    Test: spdk_blobfs_bdev_create_test ...[2024-11-20 04:57:55.799371] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:07:41.901  passed
00:07:41.901    Test: spdk_blobfs_bdev_mount_test ...passed
00:07:41.901  
00:07:41.901  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:41.901                suites      1      1    n/a      0        0
00:07:41.901                 tests      3      3      3      0        0
00:07:41.901               asserts      9      9      9      0      n/a
00:07:41.901  
00:07:41.901  Elapsed time =    0.001 seconds
00:07:41.901  
00:07:41.901  real	0m13.315s
00:07:41.901  user	0m12.773s
00:07:41.901  sys	0m0.711s
00:07:41.901   04:57:55 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.901   04:57:55 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x
00:07:41.901  ************************************
00:07:41.901  END TEST unittest_blob_blobfs
00:07:41.901  ************************************
00:07:41.901   04:57:55 unittest -- unit/unittest.sh@216 -- # run_test unittest_event unittest_event
00:07:41.901   04:57:55 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:41.901   04:57:55 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.901   04:57:55 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:42.161  ************************************
00:07:42.161  START TEST unittest_event
00:07:42.161  ************************************
00:07:42.161   04:57:55 unittest.unittest_event -- common/autotest_common.sh@1129 -- # unittest_event
00:07:42.161   04:57:55 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut
00:07:42.161  
00:07:42.161  
00:07:42.161       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.161       http://cunit.sourceforge.net/
00:07:42.161  
00:07:42.161  
00:07:42.161  Suite: app_suite
00:07:42.161    Test: test_spdk_app_parse_args ...app_ut [options]
00:07:42.161  
00:07:42.161  CPU options:
00:07:42.161   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:07:42.161                                   (like [0,1,10])
00:07:42.161       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:42.161                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:42.161                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:07:42.161                             Within the group, '-' is used for range separator,
00:07:42.161                             ',' is used for single number separator.
00:07:42.161                             '( )' can be omitted for single element group,
00:07:42.161                             '@' can be omitted if cpus and lcores have the same value
00:07:42.161       --disable-cpumask-locks    Disable CPU core lock files.
00:07:42.161       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:07:42.161                             pollers in the app support interrupt mode)
00:07:42.161   -p, --main-core <id>      main (primary) core for DPDK
00:07:42.161  
00:07:42.161  Configuration options:
00:07:42.161   -c, --config, --json  <config>     JSON config file
00:07:42.161   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:42.161       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:07:42.161       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:42.161       --rpcs-allowed	   comma-separated list of permitted RPCS
00:07:42.161       --json-ignore-init-errors    don't exit on invalid config entry
00:07:42.161  
00:07:42.161  Memory options:
00:07:42.161       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:42.161       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:42.161       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:42.161   -R, --huge-unlink         unlink huge files after initialization
00:07:42.161   -n, --mem-channels <num>  number of memory channels used for DPDK
00:07:42.161   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:42.161       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:42.161       --no-huge             run without using hugepages
00:07:42.161       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:07:42.161   -i, --shm-id <id>         shared memory ID (optional)
00:07:42.161   -g, --single-file-segments   force creating just one hugetlbfs file
00:07:42.161  
00:07:42.161  PCI options:
00:07:42.161   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:42.161   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:42.161   -u, --no-pci              disable PCI access
00:07:42.161       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:42.161  
00:07:42.161  Log options:
00:07:42.161   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:07:42.161       --silence-noticelog   disable notice level logging to stderr
00:07:42.161  
00:07:42.161  Trace options:
00:07:42.161       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:07:42.161                                   setting 0 to disable trace (default 32768)
00:07:42.161                                   Tracepoints vary in size and can use more than one trace entry.
00:07:42.161   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:42.161                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:07:42.161  app_ut: invalid option -- 'z'
00:07:42.161                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:07:42.161                             a tracepoint group. First tpoint inside a group can be enabled by
00:07:42.161                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:07:42.161                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:07:42.161                             in /include/spdk_internal/trace_defs.h
00:07:42.161  
00:07:42.161  Other options:
00:07:42.161   -h, --help                show this usage
00:07:42.161   -v, --version             print SPDK version
00:07:42.161   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:42.161       --env-context         Opaque context for use of the env implementation
00:07:42.161  app_ut [options]
00:07:42.161  
00:07:42.161  CPU options:
00:07:42.161   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:07:42.161                                   (like [0,1,10])
00:07:42.161       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:42.161  app_ut: unrecognized option '--test-long-opt'
00:07:42.161                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:42.161                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:07:42.161                             Within the group, '-' is used for range separator,
00:07:42.161                             ',' is used for single number separator.
00:07:42.161                             '( )' can be omitted for single element group,
00:07:42.161                             '@' can be omitted if cpus and lcores have the same value
00:07:42.161       --disable-cpumask-locks    Disable CPU core lock files.
00:07:42.161       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:07:42.161                             pollers in the app support interrupt mode)
00:07:42.161   -p, --main-core <id>      main (primary) core for DPDK
00:07:42.161  
00:07:42.161  Configuration options:
00:07:42.161   -c, --config, --json  <config>     JSON config file
00:07:42.161   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:42.161       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:07:42.161       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:42.161       --rpcs-allowed	   comma-separated list of permitted RPCS
00:07:42.161       --json-ignore-init-errors    don't exit on invalid config entry
00:07:42.161  
00:07:42.161  Memory options:
00:07:42.161       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:42.161       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:42.161       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:42.161   -R, --huge-unlink         unlink huge files after initialization
00:07:42.161   -n, --mem-channels <num>  number of memory channels used for DPDK
00:07:42.161   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:42.161       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:42.161       --no-huge             run without using hugepages
00:07:42.161       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:07:42.161   -i, --shm-id <id>         shared memory ID (optional)
00:07:42.161   -g, --single-file-segments   force creating just one hugetlbfs file
00:07:42.161  
00:07:42.161  PCI options:
00:07:42.161   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:42.161   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:42.161   -u, --no-pci              disable PCI access
00:07:42.161       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:42.161  
00:07:42.161  Log options:
00:07:42.161   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:07:42.161       --silence-noticelog   disable notice level logging to stderr
00:07:42.161  
00:07:42.161  Trace options:
00:07:42.161       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:07:42.161                                   setting 0 to disable trace (default 32768)
00:07:42.161                                   Tracepoints vary in size and can use more than one trace entry.
00:07:42.161   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:42.161                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:07:42.161                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:07:42.161                             a tracepoint group. First tpoint inside a group can be enabled by
00:07:42.161                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:07:42.161                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:07:42.161                             in /include/spdk_internal/trace_defs.h
00:07:42.161  
00:07:42.161  Other options:
00:07:42.161   -h, --help                show this usage
00:07:42.161   -v, --version             print SPDK version
00:07:42.161   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:42.161       --env-context         Opaque context for use of the env implementation
00:07:42.161  [2024-11-20 04:57:55.882768] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1204:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts.
00:07:42.161  app_ut [options]
00:07:42.161  
00:07:42.161  CPU options:
00:07:42.161   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:07:42.161                                   (like [0,1,10])
00:07:42.162       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:07:42.162                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:07:42.162                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:07:42.162                             Within the group, '-' is used for range separator,
00:07:42.162                             ',' is used for single number separator.
00:07:42.162  [2024-11-20 04:57:55.883022] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1388:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time
00:07:42.162                             '( )' can be omitted for single element group,
00:07:42.162                             '@' can be omitted if cpus and lcores have the same value
00:07:42.162       --disable-cpumask-locks    Disable CPU core lock files.
00:07:42.162       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:07:42.162                             pollers in the app support interrupt mode)
00:07:42.162   -p, --main-core <id>      main (primary) core for DPDK
00:07:42.162  
00:07:42.162  Configuration options:
00:07:42.162   -c, --config, --json  <config>     JSON config file
00:07:42.162   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:07:42.162       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:07:42.162       --wait-for-rpc        wait for RPCs to initialize subsystems
00:07:42.162       --rpcs-allowed	   comma-separated list of permitted RPCS
00:07:42.162       --json-ignore-init-errors    don't exit on invalid config entry
00:07:42.162  
00:07:42.162  Memory options:
00:07:42.162       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:07:42.162       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:07:42.162       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:07:42.162   -R, --huge-unlink         unlink huge files after initialization
00:07:42.162   -n, --mem-channels <num>  number of memory channels used for DPDK
00:07:42.162   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:07:42.162       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:07:42.162       --no-huge             run without using hugepages
00:07:42.162       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:07:42.162   -i, --shm-id <id>         shared memory ID (optional)
00:07:42.162   -g, --single-file-segments   force creating just one hugetlbfs file
00:07:42.162  
00:07:42.162  PCI options:
00:07:42.162   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:07:42.162   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:07:42.162   -u, --no-pci              disable PCI access
00:07:42.162       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:07:42.162  
00:07:42.162  Log options:
00:07:42.162   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:07:42.162       --silence-noticelog   disable notice level logging to stderr
00:07:42.162  
00:07:42.162  Trace options:
00:07:42.162       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:07:42.162                                   setting 0 to disable trace (default 32768)
00:07:42.162                                   Tracepoints vary in size and can use more than one trace entry.
00:07:42.162   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:07:42.162                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:07:42.162                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:07:42.162                             a tracepoint group. First tpoint inside a group can be enabled by
00:07:42.162                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:07:42.162                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:07:42.162                             in /include/spdk_internal/trace_defs.h
00:07:42.162  
00:07:42.162  Other options:
00:07:42.162   -h, --help                show this usage
00:07:42.162   -v, --version             print SPDK version
00:07:42.162   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:07:42.162       --env-context         Opaque context for use of the env implementation
00:07:42.162  passed
00:07:42.162  
00:07:42.162  [2024-11-20 04:57:55.883224] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1290:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments
00:07:42.162  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.162                suites      1      1    n/a      0        0
00:07:42.162                 tests      1      1      1      0        0
00:07:42.162               asserts      8      8      8      0      n/a
00:07:42.162  
00:07:42.162  Elapsed time =    0.001 seconds
00:07:42.162   04:57:55 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut
00:07:42.162  
00:07:42.162  
00:07:42.162       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.162       http://cunit.sourceforge.net/
00:07:42.162  
00:07:42.162  
00:07:42.162  Suite: app_suite
00:07:42.162    Test: test_create_reactor ...passed
00:07:42.162    Test: test_init_reactors ...passed
00:07:42.162    Test: test_event_call ...passed
00:07:42.162    Test: test_schedule_thread ...passed
00:07:42.162    Test: test_reschedule_thread ...passed
00:07:42.162    Test: test_bind_thread ...passed
00:07:42.162    Test: test_for_each_reactor ...passed
00:07:42.162    Test: test_reactor_stats ...passed
00:07:42.162    Test: test_scheduler ...passed
00:07:42.162    Test: test_governor ...passed
00:07:42.162    Test: test_scheduler_set_isolated_core_mask ...[2024-11-20 04:57:55.933087] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:07:42.162  [2024-11-20 04:57:55.933434] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:07:42.162  passed
00:07:42.162    Test: test_mixed_workload ...passed
00:07:42.162  
00:07:42.162  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.162                suites      1      1    n/a      0        0
00:07:42.162                 tests     12     12     12      0        0
00:07:42.162               asserts    344    344    344      0      n/a
00:07:42.162  
00:07:42.162  Elapsed time =    0.025 seconds
00:07:42.162  
00:07:42.162  real	0m0.096s
00:07:42.162  user	0m0.064s
00:07:42.162  sys	0m0.032s
00:07:42.162   04:57:55 unittest.unittest_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.162   04:57:55 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x
00:07:42.162  ************************************
00:07:42.162  END TEST unittest_event
00:07:42.162  ************************************
00:07:42.162    04:57:55 unittest -- unit/unittest.sh@217 -- # uname -s
00:07:42.162   04:57:55 unittest -- unit/unittest.sh@217 -- # '[' Linux = Linux ']'
00:07:42.162   04:57:55 unittest -- unit/unittest.sh@218 -- # run_test unittest_ftl unittest_ftl
00:07:42.162   04:57:55 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:42.162   04:57:55 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.162   04:57:55 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:42.162  ************************************
00:07:42.162  START TEST unittest_ftl
00:07:42.162  ************************************
00:07:42.162   04:57:56 unittest.unittest_ftl -- common/autotest_common.sh@1129 -- # unittest_ftl
00:07:42.162   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@56 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_band.c/ftl_band_ut
00:07:42.162  
00:07:42.162  
00:07:42.162       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.162       http://cunit.sourceforge.net/
00:07:42.162  
00:07:42.162  
00:07:42.162  Suite: ftl_band_suite
00:07:42.162    Test: test_band_block_offset_from_addr_base ...passed
00:07:42.162    Test: test_band_block_offset_from_addr_offset ...passed
00:07:42.420    Test: test_band_addr_from_block_offset ...passed
00:07:42.420    Test: test_band_set_addr ...passed
00:07:42.420    Test: test_invalidate_addr ...passed
00:07:42.420    Test: test_next_xfer_addr ...passed
00:07:42.420  
00:07:42.420  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.420                suites      1      1    n/a      0        0
00:07:42.420                 tests      6      6      6      0        0
00:07:42.420               asserts  30356  30356  30356      0      n/a
00:07:42.420  
00:07:42.420  Elapsed time =    0.180 seconds
00:07:42.420   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@57 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_bitmap.c/ftl_bitmap_ut
00:07:42.420  
00:07:42.420  
00:07:42.420       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.420       http://cunit.sourceforge.net/
00:07:42.420  
00:07:42.420  
00:07:42.420  Suite: ftl_bitmap
00:07:42.420    Test: test_ftl_bitmap_create ...[2024-11-20 04:57:56.290905] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  52:ftl_bitmap_create: *ERROR*: Buffer for bitmap must be aligned to 8 bytes
00:07:42.420  [2024-11-20 04:57:56.291329] /home/vagrant/spdk_repo/spdk/lib/ftl/utils/ftl_bitmap.c:  58:ftl_bitmap_create: *ERROR*: Size of buffer for bitmap must be divisible by 8 bytes
00:07:42.420  passed
00:07:42.420    Test: test_ftl_bitmap_get ...passed
00:07:42.420    Test: test_ftl_bitmap_set ...passed
00:07:42.420    Test: test_ftl_bitmap_clear ...passed
00:07:42.420    Test: test_ftl_bitmap_find_first_set ...passed
00:07:42.420    Test: test_ftl_bitmap_find_first_clear ...passed
00:07:42.420    Test: test_ftl_bitmap_count_set ...passed
00:07:42.420  
00:07:42.420  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.421                suites      1      1    n/a      0        0
00:07:42.421                 tests      7      7      7      0        0
00:07:42.421               asserts    137    137    137      0      n/a
00:07:42.421  
00:07:42.421  Elapsed time =    0.001 seconds
00:07:42.421   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@58 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_io.c/ftl_io_ut
00:07:42.421  
00:07:42.421  
00:07:42.421       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.421       http://cunit.sourceforge.net/
00:07:42.421  
00:07:42.421  
00:07:42.421  Suite: ftl_io_suite
00:07:42.421    Test: test_completion ...passed
00:07:42.421    Test: test_multiple_ios ...passed
00:07:42.421  
00:07:42.421  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.421                suites      1      1    n/a      0        0
00:07:42.421                 tests      2      2      2      0        0
00:07:42.421               asserts     47     47     47      0      n/a
00:07:42.421  
00:07:42.421  Elapsed time =    0.004 seconds
00:07:42.421   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@59 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mngt/ftl_mngt_ut
00:07:42.421  
00:07:42.421  
00:07:42.421       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.421       http://cunit.sourceforge.net/
00:07:42.421  
00:07:42.421  
00:07:42.421  Suite: ftl_mngt
00:07:42.421    Test: test_next_step ...passed
00:07:42.421    Test: test_continue_step ...passed
00:07:42.421    Test: test_get_func_and_step_cntx_alloc ...passed
00:07:42.421    Test: test_fail_step ...passed
00:07:42.421    Test: test_mngt_call_and_call_rollback ...passed
00:07:42.421    Test: test_nested_process_failure ...passed
00:07:42.421    Test: test_call_init_success ...passed
00:07:42.421    Test: test_call_init_failure ...passed
00:07:42.421  
00:07:42.421  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.421                suites      1      1    n/a      0        0
00:07:42.421                 tests      8      8      8      0        0
00:07:42.421               asserts    196    196    196      0      n/a
00:07:42.421  
00:07:42.421  Elapsed time =    0.002 seconds
00:07:42.421   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@60 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_mempool.c/ftl_mempool_ut
00:07:42.679  
00:07:42.679  
00:07:42.679       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.679       http://cunit.sourceforge.net/
00:07:42.679  
00:07:42.679  
00:07:42.679  Suite: ftl_mempool
00:07:42.679    Test: test_ftl_mempool_create ...passed
00:07:42.679    Test: test_ftl_mempool_get_put ...passed
00:07:42.679  
00:07:42.680  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.680                suites      1      1    n/a      0        0
00:07:42.680                 tests      2      2      2      0        0
00:07:42.680               asserts     36     36     36      0      n/a
00:07:42.680  
00:07:42.680  Elapsed time =    0.000 seconds
00:07:42.680   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@61 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_l2p/ftl_l2p_ut
00:07:42.680  
00:07:42.680  
00:07:42.680       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.680       http://cunit.sourceforge.net/
00:07:42.680  
00:07:42.680  
00:07:42.680  Suite: ftl_addr64_suite
00:07:42.680    Test: test_addr_cached ...passed
00:07:42.680  
00:07:42.680  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.680                suites      1      1    n/a      0        0
00:07:42.680                 tests      1      1      1      0        0
00:07:42.680               asserts   1536   1536   1536      0      n/a
00:07:42.680  
00:07:42.680  Elapsed time =    0.000 seconds
00:07:42.680   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@62 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_sb/ftl_sb_ut
00:07:42.680  
00:07:42.680  
00:07:42.680       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.680       http://cunit.sourceforge.net/
00:07:42.680  
00:07:42.680  
00:07:42.680  Suite: ftl_sb
00:07:42.680    Test: test_sb_crc_v2 ...passed
00:07:42.680    Test: test_sb_crc_v3 ...passed
00:07:42.680    Test: test_sb_v3_md_layout ...[2024-11-20 04:57:56.447550] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 143:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Missing regions
00:07:42.680  [2024-11-20 04:57:56.447850] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 131:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:42.680  [2024-11-20 04:57:56.447901] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:42.680  [2024-11-20 04:57:56.447950] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 115:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Buffer overflow
00:07:42.680  [2024-11-20 04:57:56.447981] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:42.680  [2024-11-20 04:57:56.448048] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  93:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Unsupported MD region type found
00:07:42.680  [2024-11-20 04:57:56.448079] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:42.680  [2024-11-20 04:57:56.448143] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c:  88:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Invalid MD region type found
00:07:42.680  [2024-11-20 04:57:56.448212] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 125:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Looping regions found
00:07:42.680  [2024-11-20 04:57:56.448252] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:42.680  [2024-11-20 04:57:56.448290] /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_sb_v3.c: 105:ftl_superblock_v3_md_layout_load_all: *ERROR*: [FTL][(null)] Multiple/looping regions found
00:07:42.680  passed
00:07:42.680    Test: test_sb_v5_md_layout ...passed
00:07:42.680  
00:07:42.680  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.680                suites      1      1    n/a      0        0
00:07:42.680                 tests      4      4      4      0        0
00:07:42.680               asserts    170    170    170      0      n/a
00:07:42.680  
00:07:42.680  Elapsed time =    0.002 seconds
00:07:42.680   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@63 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_layout_upgrade/ftl_layout_upgrade_ut
00:07:42.680  
00:07:42.680  
00:07:42.680       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.680       http://cunit.sourceforge.net/
00:07:42.680  
00:07:42.680  
00:07:42.680  Suite: ftl_layout_upgrade
00:07:42.680    Test: test_l2p_upgrade ...passed
00:07:42.680  
00:07:42.680  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:42.680                suites      1      1    n/a      0        0
00:07:42.680                 tests      1      1      1      0        0
00:07:42.680               asserts    164    164    164      0      n/a
00:07:42.680  
00:07:42.680  Elapsed time =    0.001 seconds
00:07:42.680   04:57:56 unittest.unittest_ftl -- unit/unittest.sh@64 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ftl/ftl_p2l.c/ftl_p2l_ut
00:07:42.680  
00:07:42.680  
00:07:42.680       CUnit - A unit testing framework for C - Version 2.1-3
00:07:42.680       http://cunit.sourceforge.net/
00:07:42.680  
00:07:42.680  
00:07:42.680  Suite: ftl_p2l_suite
00:07:42.680    Test: test_p2l_num_pages ...passed
00:07:43.246    Test: test_ckpt_issue ...passed
00:07:43.505    Test: test_persist_band_p2l ...passed
00:07:44.073    Test: test_clean_restore_p2l ...passed
00:07:45.016    Test: test_dirty_restore_p2l ...passed
00:07:45.016  
00:07:45.016  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.016                suites      1      1    n/a      0        0
00:07:45.016                 tests      5      5      5      0        0
00:07:45.016               asserts  10020  10020  10020      0      n/a
00:07:45.016  
00:07:45.016  Elapsed time =    2.296 seconds
00:07:45.016  
00:07:45.016  real	0m2.817s
00:07:45.016  user	0m0.898s
00:07:45.016  sys	0m1.913s
00:07:45.016   04:57:58 unittest.unittest_ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.016   04:57:58 unittest.unittest_ftl -- common/autotest_common.sh@10 -- # set +x
00:07:45.016  ************************************
00:07:45.016  END TEST unittest_ftl
00:07:45.016  ************************************
00:07:45.016   04:57:58 unittest -- unit/unittest.sh@221 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:45.016   04:57:58 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.016   04:57:58 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.016   04:57:58 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:45.016  ************************************
00:07:45.016  START TEST unittest_accel
00:07:45.016  ************************************
00:07:45.016   04:57:58 unittest.unittest_accel -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:07:45.016  
00:07:45.016  
00:07:45.016       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.016       http://cunit.sourceforge.net/
00:07:45.016  
00:07:45.016  
00:07:45.016  Suite: accel_sequence
00:07:45.016    Test: test_sequence_fill_copy ...passed
00:07:45.016    Test: test_sequence_abort ...passed
00:07:45.016    Test: test_sequence_append_error ...passed
00:07:45.016    Test: test_sequence_completion_error ...[2024-11-20 04:57:58.911710] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2382:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fd94d3357c0
00:07:45.016  [2024-11-20 04:57:58.912038] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2382:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x7fd94d3357c0
00:07:45.016  [2024-11-20 04:57:58.912166] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2295:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x7fd94d3357c0
00:07:45.016  [2024-11-20 04:57:58.912230] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2295:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x7fd94d3357c0
00:07:45.016  passed
00:07:45.016    Test: test_sequence_decompress ...passed
00:07:45.016    Test: test_sequence_reverse ...passed
00:07:45.016    Test: test_sequence_copy_elision ...passed
00:07:45.017    Test: test_sequence_accel_buffers ...passed
00:07:45.017    Test: test_sequence_memory_domain ...[2024-11-20 04:57:58.924872] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2187:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7
00:07:45.017  [2024-11-20 04:57:58.925080] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2226:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -98
00:07:45.017  passed
00:07:45.017    Test: test_sequence_module_memory_domain ...passed
00:07:45.017    Test: test_sequence_crypto ...passed
00:07:45.017    Test: test_sequence_driver ...[2024-11-20 04:57:58.933368] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2334:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x7fd94b8ca7c0 using driver: ut
00:07:45.017  [2024-11-20 04:57:58.933492] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2395:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x7fd94b8ca7c0 through driver: ut
00:07:45.017  passed
00:07:45.017    Test: test_sequence_same_iovs ...passed
00:07:45.017    Test: test_sequence_crc32 ...passed
00:07:45.017    Test: test_sequence_dix_generate_verify ...passed
00:07:45.017    Test: test_sequence_dix ...passed
00:07:45.017  Suite: accel
00:07:45.017    Test: test_spdk_accel_task_complete ...passed
00:07:45.017    Test: test_get_task ...passed
00:07:45.017    Test: test_spdk_accel_submit_copy ...passed
00:07:45.017    Test: test_spdk_accel_submit_dualcast ...[2024-11-20 04:57:58.944396] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:45.017  [2024-11-20 04:57:58.944467] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:07:45.017  passed
00:07:45.017    Test: test_spdk_accel_submit_compare ...passed
00:07:45.017    Test: test_spdk_accel_submit_fill ...passed
00:07:45.017    Test: test_spdk_accel_submit_crc32c ...passed
00:07:45.017    Test: test_spdk_accel_submit_crc32cv ...passed
00:07:45.017    Test: test_spdk_accel_submit_copy_crc32c ...passed
00:07:45.017    Test: test_spdk_accel_submit_xor ...passed
00:07:45.017    Test: test_spdk_accel_module_find_by_name ...passed
00:07:45.017    Test: test_spdk_accel_module_register ...passed
00:07:45.017  
00:07:45.017  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.017                suites      2      2    n/a      0        0
00:07:45.017                 tests     28     28     28      0        0
00:07:45.017               asserts    884    884    884      0      n/a
00:07:45.017  
00:07:45.017  Elapsed time =    0.045 seconds
00:07:45.284  
00:07:45.284  real	0m0.088s
00:07:45.284  user	0m0.045s
00:07:45.284  sys	0m0.044s
00:07:45.284   04:57:58 unittest.unittest_accel -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.284  ************************************
00:07:45.284  END TEST unittest_accel
00:07:45.284   04:57:58 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x
00:07:45.284  ************************************
00:07:45.284   04:57:59 unittest -- unit/unittest.sh@222 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:45.284  ************************************
00:07:45.284  START TEST unittest_ioat
00:07:45.284  ************************************
00:07:45.284   04:57:59 unittest.unittest_ioat -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:07:45.284  
00:07:45.284  
00:07:45.284       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.284       http://cunit.sourceforge.net/
00:07:45.284  
00:07:45.284  
00:07:45.284  Suite: ioat
00:07:45.284    Test: ioat_state_check ...passed
00:07:45.284  
00:07:45.284  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.284                suites      1      1    n/a      0        0
00:07:45.284                 tests      1      1      1      0        0
00:07:45.284               asserts     32     32     32      0      n/a
00:07:45.284  
00:07:45.284  Elapsed time =    0.000 seconds
00:07:45.284  
00:07:45.284  real	0m0.029s
00:07:45.284  user	0m0.012s
00:07:45.284  sys	0m0.017s
00:07:45.284   04:57:59 unittest.unittest_ioat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.284   04:57:59 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x
00:07:45.284  ************************************
00:07:45.284  END TEST unittest_ioat
00:07:45.284  ************************************
00:07:45.284   04:57:59 unittest -- unit/unittest.sh@223 -- # [[ y == y ]]
00:07:45.284   04:57:59 unittest -- unit/unittest.sh@224 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:45.284  ************************************
00:07:45.284  START TEST unittest_idxd_user
00:07:45.284  ************************************
00:07:45.284   04:57:59 unittest.unittest_idxd_user -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:07:45.284  
00:07:45.284  
00:07:45.284       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.284       http://cunit.sourceforge.net/
00:07:45.284  
00:07:45.284  
00:07:45.284  Suite: idxd_user
00:07:45.284    Test: test_idxd_wait_cmd ...[2024-11-20 04:57:59.114597] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:45.284  passed
00:07:45.284    Test: test_idxd_reset_dev ...[2024-11-20 04:57:59.114903] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1
00:07:45.284  [2024-11-20 04:57:59.115073] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:07:45.284  passed
00:07:45.284    Test: test_idxd_group_config ...passed
00:07:45.284    Test: test_idxd_wq_config ...passed
00:07:45.284  
00:07:45.284  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.284                suites      1      1    n/a      0        0
00:07:45.284                 tests      4      4      4      0        0
00:07:45.284               asserts     20     20     20      0      n/a
00:07:45.284  
00:07:45.284  Elapsed time =    0.001 seconds
00:07:45.284  [2024-11-20 04:57:59.115133] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274
00:07:45.284  
00:07:45.284  real	0m0.030s
00:07:45.284  user	0m0.021s
00:07:45.284  sys	0m0.010s
00:07:45.284   04:57:59 unittest.unittest_idxd_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.284   04:57:59 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x
00:07:45.284  ************************************
00:07:45.284  END TEST unittest_idxd_user
00:07:45.284  ************************************
00:07:45.284   04:57:59 unittest -- unit/unittest.sh@226 -- # run_test unittest_iscsi unittest_iscsi
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.284   04:57:59 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:45.284  ************************************
00:07:45.284  START TEST unittest_iscsi
00:07:45.284  ************************************
00:07:45.284   04:57:59 unittest.unittest_iscsi -- common/autotest_common.sh@1129 -- # unittest_iscsi
00:07:45.284   04:57:59 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut
00:07:45.284  
00:07:45.284  
00:07:45.284       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.284       http://cunit.sourceforge.net/
00:07:45.284  
00:07:45.284  
00:07:45.284  Suite: conn_suite
00:07:45.284    Test: read_task_split_in_order_case ...passed
00:07:45.284    Test: read_task_split_reverse_order_case ...passed
00:07:45.284    Test: propagate_scsi_error_status_for_split_read_tasks ...passed
00:07:45.284    Test: process_non_read_task_completion_test ...passed
00:07:45.284    Test: free_tasks_on_connection ...passed
00:07:45.284    Test: free_tasks_with_queued_datain ...passed
00:07:45.284    Test: abort_queued_datain_task_test ...passed
00:07:45.284    Test: abort_queued_datain_tasks_test ...passed
00:07:45.284  
00:07:45.284  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.284                suites      1      1    n/a      0        0
00:07:45.284                 tests      8      8      8      0        0
00:07:45.284               asserts    230    230    230      0      n/a
00:07:45.284  
00:07:45.284  Elapsed time =    0.000 seconds
00:07:45.284   04:57:59 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut
00:07:45.284  
00:07:45.284  
00:07:45.284       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.284       http://cunit.sourceforge.net/
00:07:45.284  
00:07:45.284  
00:07:45.284  Suite: iscsi_suite
00:07:45.543    Test: param_negotiation_test ...passed
00:07:45.543    Test: list_negotiation_test ...passed
00:07:45.543    Test: parse_valid_test ...passed
00:07:45.543    Test: parse_invalid_test ...[2024-11-20 04:57:59.240605] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:07:45.543  [2024-11-20 04:57:59.240898] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:07:45.543  [2024-11-20 04:57:59.240972] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key
00:07:45.543  [2024-11-20 04:57:59.241032] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193
00:07:45.543  [2024-11-20 04:57:59.241181] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256
00:07:45.543  [2024-11-20 04:57:59.241249] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63
00:07:45.543  [2024-11-20 04:57:59.241401] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B
00:07:45.543  passed
00:07:45.543  
00:07:45.543  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.543                suites      1      1    n/a      0        0
00:07:45.543                 tests      4      4      4      0        0
00:07:45.543               asserts    161    161    161      0      n/a
00:07:45.543  
00:07:45.543  Elapsed time =    0.005 seconds
00:07:45.543   04:57:59 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut
00:07:45.543  
00:07:45.543  
00:07:45.543       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.543       http://cunit.sourceforge.net/
00:07:45.543  
00:07:45.543  
00:07:45.543  Suite: iscsi_target_node_suite
00:07:45.543    Test: add_lun_test_cases ...[2024-11-20 04:57:59.274716] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1252:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1)
00:07:45.543  [2024-11-20 04:57:59.275015] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative
00:07:45.543  [2024-11-20 04:57:59.275130] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:45.544  [2024-11-20 04:57:59.275176] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:07:45.544  passed
00:07:45.544    Test: allow_any_allowed ...passed
00:07:45.544    Test: allow_ipv6_allowed ...[2024-11-20 04:57:59.275213] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed
00:07:45.544  passed
00:07:45.544    Test: allow_ipv6_denied ...passed
00:07:45.544    Test: allow_ipv6_invalid ...passed
00:07:45.544    Test: allow_ipv4_allowed ...passed
00:07:45.544    Test: allow_ipv4_denied ...passed
00:07:45.544    Test: allow_ipv4_invalid ...passed
00:07:45.544    Test: node_access_allowed ...passed
00:07:45.544    Test: node_access_denied_by_empty_netmask ...passed
00:07:45.544    Test: node_access_multi_initiator_groups_cases ...passed
00:07:45.544    Test: allow_iscsi_name_multi_maps_case ...passed
00:07:45.544    Test: chap_param_test_cases ...[2024-11-20 04:57:59.275668] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0)
00:07:45.544  [2024-11-20 04:57:59.275713] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1)
00:07:45.544  [2024-11-20 04:57:59.275775] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1)
00:07:45.544  [2024-11-20 04:57:59.275812] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1039:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1)
00:07:45.544  [2024-11-20 04:57:59.275855] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1)
00:07:45.544  passed
00:07:45.544  
00:07:45.544  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.544                suites      1      1    n/a      0        0
00:07:45.544                 tests     13     13     13      0        0
00:07:45.544               asserts     50     50     50      0      n/a
00:07:45.544  
00:07:45.544  Elapsed time =    0.001 seconds
00:07:45.544   04:57:59 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut
00:07:45.544  
00:07:45.544  
00:07:45.544       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.544       http://cunit.sourceforge.net/
00:07:45.544  
00:07:45.544  
00:07:45.544  Suite: iscsi_suite
00:07:45.544    Test: op_login_check_target_test ...[2024-11-20 04:57:59.315420] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied
00:07:45.544  passed
00:07:45.544    Test: op_login_session_normal_test ...[2024-11-20 04:57:59.315758] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:45.544  [2024-11-20 04:57:59.315808] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:45.544  [2024-11-20 04:57:59.315853] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:07:45.544  [2024-11-20 04:57:59.315919] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed
00:07:45.544  [2024-11-20 04:57:59.316033] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:45.544  [2024-11-20 04:57:59.316134] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 702:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0
00:07:45.544  [2024-11-20 04:57:59.316194] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1472:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:07:45.544  passed
00:07:45.544    Test: maxburstlength_test ...[2024-11-20 04:57:59.316451] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:45.544  passed
00:07:45.544    Test: underflow_for_read_transfer_test ...[2024-11-20 04:57:59.316525] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4566:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL)
00:07:45.544  passed
00:07:45.544    Test: underflow_for_zero_read_transfer_test ...passed
00:07:45.544    Test: underflow_for_request_sense_test ...passed
00:07:45.544    Test: underflow_for_check_condition_test ...passed
00:07:45.544    Test: add_transfer_task_test ...passed
00:07:45.544    Test: get_transfer_task_test ...passed
00:07:45.544    Test: del_transfer_task_test ...passed
00:07:45.544    Test: clear_all_transfer_tasks_test ...passed
00:07:45.544    Test: build_iovs_test ...passed
00:07:45.544    Test: build_iovs_with_md_test ...passed
00:07:45.544    Test: pdu_hdr_op_login_test ...[2024-11-20 04:57:59.318056] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error
00:07:45.544  [2024-11-20 04:57:59.318192] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1263:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0
00:07:45.544  [2024-11-20 04:57:59.318275] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2
00:07:45.544  passed
00:07:45.544    Test: pdu_hdr_op_text_test ...[2024-11-20 04:57:59.318375] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2258:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:45.544  [2024-11-20 04:57:59.318482] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue
00:07:45.544  passed
00:07:45.544    Test: pdu_hdr_op_logout_test ...[2024-11-20 04:57:59.318544] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2303:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678...
00:07:45.544  [2024-11-20 04:57:59.318633] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2533:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason.
00:07:45.544  passed
00:07:45.544    Test: pdu_hdr_op_scsi_test ...[2024-11-20 04:57:59.318793] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:45.544  [2024-11-20 04:57:59.318836] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:07:45.544  [2024-11-20 04:57:59.318889] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported
00:07:45.544  [2024-11-20 04:57:59.318992] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3415:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68)
00:07:45.544  [2024-11-20 04:57:59.319096] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3422:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67)
00:07:45.544  [2024-11-20 04:57:59.319292] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:07:45.544  passed
00:07:45.544    Test: pdu_hdr_op_task_mgmt_test ...[2024-11-20 04:57:59.319405] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session
00:07:45.544  [2024-11-20 04:57:59.319472] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0
00:07:45.544  passed
00:07:45.544    Test: pdu_hdr_op_nopout_test ...[2024-11-20 04:57:59.319692] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session
00:07:45.544  [2024-11-20 04:57:59.319804] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:45.544  passed
00:07:45.544    Test: pdu_hdr_op_data_test ...[2024-11-20 04:57:59.319845] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:07:45.544  [2024-11-20 04:57:59.319881] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0
00:07:45.544  [2024-11-20 04:57:59.319927] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session
00:07:45.544  [2024-11-20 04:57:59.319998] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0
00:07:45.544  [2024-11-20 04:57:59.320067] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:07:45.544  [2024-11-20 04:57:59.320133] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4234:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1
00:07:45.544  [2024-11-20 04:57:59.320188] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error
00:07:45.544  [2024-11-20 04:57:59.320270] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error
00:07:45.544  [2024-11-20 04:57:59.320317] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4261:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535)
00:07:45.544  passed
00:07:45.544    Test: empty_text_with_cbit_test ...passed
00:07:45.544    Test: pdu_payload_read_test ...[2024-11-20 04:57:59.322507] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4649:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536)
00:07:45.544  passed
00:07:45.544    Test: data_out_pdu_sequence_test ...passed
00:07:45.544    Test: immediate_data_and_data_out_pdu_sequence_test ...passed
00:07:45.544  
00:07:45.544  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.544                suites      1      1    n/a      0        0
00:07:45.544                 tests     24     24     24      0        0
00:07:45.544               asserts 150253 150253 150253      0      n/a
00:07:45.544  
00:07:45.544  Elapsed time =    0.017 seconds
00:07:45.544   04:57:59 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut
00:07:45.544  
00:07:45.544  
00:07:45.544       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.544       http://cunit.sourceforge.net/
00:07:45.544  
00:07:45.544  
00:07:45.544  Suite: init_grp_suite
00:07:45.544    Test: create_initiator_group_success_case ...passed
00:07:45.544    Test: find_initiator_group_success_case ...passed
00:07:45.544    Test: register_initiator_group_twice_case ...passed
00:07:45.544    Test: add_initiator_name_success_case ...passed
00:07:45.545    Test: add_initiator_name_fail_case ...[2024-11-20 04:57:59.363016] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c:  54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed
00:07:45.545  passed
00:07:45.545    Test: delete_all_initiator_names_success_case ...passed
00:07:45.545    Test: add_netmask_success_case ...passed
00:07:45.545    Test: add_netmask_fail_case ...[2024-11-20 04:57:59.363465] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed
00:07:45.545  passed
00:07:45.545    Test: delete_all_netmasks_success_case ...passed
00:07:45.545    Test: initiator_name_overwrite_all_to_any_case ...passed
00:07:45.545    Test: netmask_overwrite_all_to_any_case ...passed
00:07:45.545    Test: add_delete_initiator_names_case ...passed
00:07:45.545    Test: add_duplicated_initiator_names_case ...passed
00:07:45.545    Test: delete_nonexisting_initiator_names_case ...passed
00:07:45.545    Test: add_delete_netmasks_case ...passed
00:07:45.545    Test: add_duplicated_netmasks_case ...passed
00:07:45.545    Test: delete_nonexisting_netmasks_case ...passed
00:07:45.545  
00:07:45.545  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.545                suites      1      1    n/a      0        0
00:07:45.545                 tests     17     17     17      0        0
00:07:45.545               asserts    108    108    108      0      n/a
00:07:45.545  
00:07:45.545  Elapsed time =    0.001 seconds
00:07:45.545   04:57:59 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut
00:07:45.545  
00:07:45.545  
00:07:45.545       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.545       http://cunit.sourceforge.net/
00:07:45.545  
00:07:45.545  
00:07:45.545  Suite: portal_grp_suite
00:07:45.545    Test: portal_create_ipv4_normal_case ...passed
00:07:45.545    Test: portal_create_ipv6_normal_case ...passed
00:07:45.545    Test: portal_create_ipv4_wildcard_case ...passed
00:07:45.545    Test: portal_create_ipv6_wildcard_case ...passed
00:07:45.545    Test: portal_create_twice_case ...[2024-11-20 04:57:59.391585] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists
00:07:45.545  passed
00:07:45.545    Test: portal_grp_register_unregister_case ...passed
00:07:45.545    Test: portal_grp_register_twice_case ...passed
00:07:45.545    Test: portal_grp_add_delete_case ...passed
00:07:45.545    Test: portal_grp_add_delete_twice_case ...passed
00:07:45.545  
00:07:45.545  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.545                suites      1      1    n/a      0        0
00:07:45.545                 tests      9      9      9      0        0
00:07:45.545               asserts     44     44     44      0      n/a
00:07:45.545  
00:07:45.545  Elapsed time =    0.003 seconds
00:07:45.545  
00:07:45.545  real	0m0.224s
00:07:45.545  user	0m0.124s
00:07:45.545  sys	0m0.101s
00:07:45.545   04:57:59 unittest.unittest_iscsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.545   04:57:59 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x
00:07:45.545  ************************************
00:07:45.545  END TEST unittest_iscsi
00:07:45.545  ************************************
00:07:45.545   04:57:59 unittest -- unit/unittest.sh@227 -- # run_test unittest_json unittest_json
00:07:45.545   04:57:59 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.545   04:57:59 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.545   04:57:59 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:45.545  ************************************
00:07:45.545  START TEST unittest_json
00:07:45.545  ************************************
00:07:45.545   04:57:59 unittest.unittest_json -- common/autotest_common.sh@1129 -- # unittest_json
00:07:45.545   04:57:59 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
00:07:45.545  
00:07:45.545  
00:07:45.545       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.545       http://cunit.sourceforge.net/
00:07:45.545  
00:07:45.545  
00:07:45.545  Suite: json
00:07:45.545    Test: test_parse_literal ...passed
00:07:45.545    Test: test_parse_string_simple ...passed
00:07:45.545    Test: test_parse_string_control_chars ...passed
00:07:45.545    Test: test_parse_string_utf8 ...passed
00:07:45.545    Test: test_parse_string_escapes_twochar ...passed
00:07:45.545    Test: test_parse_string_escapes_unicode ...passed
00:07:45.545    Test: test_parse_number ...passed
00:07:45.545    Test: test_parse_array ...passed
00:07:45.545    Test: test_parse_object ...passed
00:07:45.545    Test: test_parse_nesting ...passed
00:07:45.545    Test: test_parse_comment ...passed
00:07:45.545  
00:07:45.545  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.545                suites      1      1    n/a      0        0
00:07:45.545                 tests     11     11     11      0        0
00:07:45.545               asserts   1516   1516   1516      0      n/a
00:07:45.545  
00:07:45.545  Elapsed time =    0.002 seconds
00:07:45.804   04:57:59 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
00:07:45.804  
00:07:45.804  
00:07:45.804       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.804       http://cunit.sourceforge.net/
00:07:45.804  
00:07:45.804  
00:07:45.804  Suite: json
00:07:45.804    Test: test_strequal ...passed
00:07:45.804    Test: test_num_to_uint16 ...passed
00:07:45.804    Test: test_num_to_int32 ...passed
00:07:45.804    Test: test_num_to_uint64 ...passed
00:07:45.804    Test: test_decode_object ...passed
00:07:45.804    Test: test_decode_array ...passed
00:07:45.804    Test: test_decode_bool ...passed
00:07:45.804    Test: test_decode_uint16 ...passed
00:07:45.804    Test: test_decode_int32 ...passed
00:07:45.804    Test: test_decode_uint32 ...passed
00:07:45.804    Test: test_decode_uint64 ...passed
00:07:45.804    Test: test_decode_string ...passed
00:07:45.804    Test: test_decode_uuid ...passed
00:07:45.804    Test: test_find ...passed
00:07:45.804    Test: test_find_array ...passed
00:07:45.804    Test: test_iterating ...passed
00:07:45.804    Test: test_free_object ...passed
00:07:45.804  
00:07:45.804  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.804                suites      1      1    n/a      0        0
00:07:45.804                 tests     17     17     17      0        0
00:07:45.804               asserts    236    236    236      0      n/a
00:07:45.804  
00:07:45.804  Elapsed time =    0.001 seconds
00:07:45.804   04:57:59 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut
00:07:45.804  
00:07:45.804  
00:07:45.804       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.804       http://cunit.sourceforge.net/
00:07:45.804  
00:07:45.804  
00:07:45.804  Suite: json
00:07:45.804    Test: test_write_literal ...passed
00:07:45.804    Test: test_write_string_simple ...passed
00:07:45.804    Test: test_write_string_escapes ...passed
00:07:45.804    Test: test_write_string_utf16le ...passed
00:07:45.804    Test: test_write_number_int32 ...passed
00:07:45.804    Test: test_write_number_uint32 ...passed
00:07:45.804    Test: test_write_number_uint128 ...passed
00:07:45.804    Test: test_write_string_number_uint128 ...passed
00:07:45.804    Test: test_write_number_int64 ...passed
00:07:45.804    Test: test_write_number_uint64 ...passed
00:07:45.804    Test: test_write_number_double ...passed
00:07:45.804    Test: test_write_uuid ...passed
00:07:45.804    Test: test_write_array ...passed
00:07:45.804    Test: test_write_object ...passed
00:07:45.804    Test: test_write_nesting ...passed
00:07:45.804    Test: test_write_val ...passed
00:07:45.804  
00:07:45.804  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.804                suites      1      1    n/a      0        0
00:07:45.804                 tests     16     16     16      0        0
00:07:45.804               asserts    918    918    918      0      n/a
00:07:45.804  
00:07:45.804  Elapsed time =    0.004 seconds
00:07:45.804   04:57:59 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut
00:07:45.804  
00:07:45.804  
00:07:45.804       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.804       http://cunit.sourceforge.net/
00:07:45.804  
00:07:45.804  
00:07:45.804  Suite: jsonrpc
00:07:45.804    Test: test_parse_request ...passed
00:07:45.804    Test: test_parse_request_streaming ...passed
00:07:45.804  
00:07:45.804  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.804                suites      1      1    n/a      0        0
00:07:45.804                 tests      2      2      2      0        0
00:07:45.804               asserts    289    289    289      0      n/a
00:07:45.804  
00:07:45.804  Elapsed time =    0.005 seconds
00:07:45.804  
00:07:45.804  real	0m0.135s
00:07:45.804  user	0m0.068s
00:07:45.804  sys	0m0.068s
00:07:45.804   04:57:59 unittest.unittest_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.804   04:57:59 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x
00:07:45.804  ************************************
00:07:45.804  END TEST unittest_json
00:07:45.804  ************************************
00:07:45.804   04:57:59 unittest -- unit/unittest.sh@228 -- # run_test unittest_rpc unittest_rpc
00:07:45.804   04:57:59 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.804   04:57:59 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.804   04:57:59 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:45.804  ************************************
00:07:45.804  START TEST unittest_rpc
00:07:45.804  ************************************
00:07:45.804   04:57:59 unittest.unittest_rpc -- common/autotest_common.sh@1129 -- # unittest_rpc
00:07:45.804   04:57:59 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut
00:07:45.804  
00:07:45.804  
00:07:45.804       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.804       http://cunit.sourceforge.net/
00:07:45.804  
00:07:45.804  
00:07:45.804  Suite: rpc
00:07:45.804    Test: test_jsonrpc_handler ...passed
00:07:45.804    Test: test_spdk_rpc_is_method_allowed ...passed
00:07:45.804    Test: test_rpc_get_methods ...[2024-11-20 04:57:59.665673] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed
00:07:45.804  passed
00:07:45.804    Test: test_rpc_spdk_get_version ...passed
00:07:45.804    Test: test_spdk_rpc_listen_close ...passed
00:07:45.804    Test: test_rpc_run_multiple_servers ...passed
00:07:45.804  
00:07:45.804  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.804                suites      1      1    n/a      0        0
00:07:45.804                 tests      6      6      6      0        0
00:07:45.804               asserts     23     23     23      0      n/a
00:07:45.804  
00:07:45.804  Elapsed time =    0.001 seconds
00:07:45.804  
00:07:45.804  real	0m0.029s
00:07:45.804  user	0m0.021s
00:07:45.804  sys	0m0.009s
00:07:45.804   04:57:59 unittest.unittest_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:45.804   04:57:59 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:45.804  ************************************
00:07:45.804  END TEST unittest_rpc
00:07:45.804  ************************************
00:07:45.805   04:57:59 unittest -- unit/unittest.sh@229 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:45.805   04:57:59 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:45.805   04:57:59 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:45.805   04:57:59 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:45.805  ************************************
00:07:45.805  START TEST unittest_notify
00:07:45.805  ************************************
00:07:45.805   04:57:59 unittest.unittest_notify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:07:45.805  
00:07:45.805  
00:07:45.805       CUnit - A unit testing framework for C - Version 2.1-3
00:07:45.805       http://cunit.sourceforge.net/
00:07:45.805  
00:07:45.805  
00:07:45.805  Suite: app_suite
00:07:45.805    Test: notify ...passed
00:07:45.805  
00:07:45.805  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:45.805                suites      1      1    n/a      0        0
00:07:45.805                 tests      1      1      1      0        0
00:07:45.805               asserts     13     13     13      0      n/a
00:07:45.805  
00:07:45.805  Elapsed time =    0.000 seconds
00:07:46.065  
00:07:46.065  real	0m0.028s
00:07:46.065  user	0m0.018s
00:07:46.065  sys	0m0.011s
00:07:46.065   04:57:59 unittest.unittest_notify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:46.065   04:57:59 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x
00:07:46.065  ************************************
00:07:46.065  END TEST unittest_notify
00:07:46.065  ************************************
00:07:46.065   04:57:59 unittest -- unit/unittest.sh@230 -- # run_test unittest_nvme unittest_nvme
00:07:46.065   04:57:59 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:46.065   04:57:59 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:46.065   04:57:59 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:46.065  ************************************
00:07:46.065  START TEST unittest_nvme
00:07:46.065  ************************************
00:07:46.065   04:57:59 unittest.unittest_nvme -- common/autotest_common.sh@1129 -- # unittest_nvme
00:07:46.065   04:57:59 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut
00:07:46.065  
00:07:46.065  
00:07:46.065       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.065       http://cunit.sourceforge.net/
00:07:46.065  
00:07:46.065  
00:07:46.065  Suite: nvme
00:07:46.065    Test: test_opc_data_transfer ...passed
00:07:46.065    Test: test_spdk_nvme_transport_id_parse_trtype ...passed
00:07:46.065    Test: test_spdk_nvme_transport_id_parse_adrfam ...passed
00:07:46.065    Test: test_trid_parse_and_compare ...[2024-11-20 04:57:59.831169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1225:parse_next_key: *ERROR*: Key without ':' or '=' separator
00:07:46.065  [2024-11-20 04:57:59.831580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:46.065  [2024-11-20 04:57:59.831687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1237:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31
00:07:46.065  [2024-11-20 04:57:59.831740] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:46.065  [2024-11-20 04:57:59.831774] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1248:parse_next_key: *ERROR*: Key without value
00:07:46.065  [2024-11-20 04:57:59.831901] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1282:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:07:46.065  passed
00:07:46.065    Test: test_trid_trtype_str ...passed
00:07:46.065    Test: test_trid_adrfam_str ...passed
00:07:46.065    Test: test_nvme_ctrlr_probe ...[2024-11-20 04:57:59.832183] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 662:nvme_ctrlr_probe: *ERROR*: NVMe controller for SSD:  is being destructed
00:07:46.065  passed
00:07:46.065    Test: test_spdk_nvme_probe_ext ...[2024-11-20 04:57:59.832291] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:46.065  [2024-11-20 04:57:59.832368] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:46.065  [2024-11-20 04:57:59.832428] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:07:46.065  [2024-11-20 04:57:59.832533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available
00:07:46.065  [2024-11-20 04:57:59.832585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:07:46.065  passed
00:07:46.065    Test: test_spdk_nvme_connect ...[2024-11-20 04:57:59.832677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1036:spdk_nvme_connect: *ERROR*: No transport ID specified
00:07:46.065  passed
00:07:46.065    Test: test_nvme_ctrlr_probe_internal ...[2024-11-20 04:57:59.833106] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:46.065  [2024-11-20 04:57:59.833319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:07:46.065  [2024-11-20 04:57:59.833382] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:07:46.065  passed
00:07:46.065    Test: test_nvme_init_controllers ...[2024-11-20 04:57:59.833463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 
00:07:46.065  passed
00:07:46.065    Test: test_nvme_driver_init ...[2024-11-20 04:57:59.833588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 576:nvme_driver_init: *ERROR*: primary process failed to reserve memory
00:07:46.065  [2024-11-20 04:57:59.833659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:07:46.065  [2024-11-20 04:57:59.947046] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 594:nvme_driver_init: *ERROR*: timeout waiting for primary process to init
00:07:46.065  passed
00:07:46.065    Test: test_spdk_nvme_detach ...[2024-11-20 04:57:59.947239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 616:nvme_driver_init: *ERROR*: failed to initialize mutex
00:07:46.065  passed
00:07:46.065    Test: test_nvme_completion_poll_cb ...passed
00:07:46.065    Test: test_nvme_user_copy_cmd_complete ...passed
00:07:46.065    Test: test_nvme_allocate_request_null ...passed
00:07:46.065    Test: test_nvme_allocate_request ...passed
00:07:46.065    Test: test_nvme_free_request ...passed
00:07:46.065    Test: test_nvme_allocate_request_user_copy ...passed
00:07:46.065    Test: test_nvme_robust_mutex_init_shared ...passed
00:07:46.065    Test: test_nvme_request_check_timeout ...passed
00:07:46.065    Test: test_nvme_wait_for_completion ...passed
00:07:46.065    Test: test_spdk_nvme_parse_func ...passed
00:07:46.065    Test: test_spdk_nvme_detach_async ...passed
00:07:46.065    Test: test_nvme_parse_addr ...[2024-11-20 04:57:59.948743] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1682:nvme_parse_addr: *ERROR*: getaddrinfo failed: Name or service not known (-2)
00:07:46.065  passed
00:07:46.065  
00:07:46.065  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.065                suites      1      1    n/a      0        0
00:07:46.065                 tests     25     25     25      0        0
00:07:46.065               asserts    331    331    331      0      n/a
00:07:46.065  
00:07:46.065  Elapsed time =    0.007 seconds
00:07:46.065   04:57:59 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut
00:07:46.065  
00:07:46.065  
00:07:46.065       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.065       http://cunit.sourceforge.net/
00:07:46.065  
00:07:46.065  
00:07:46.065  Suite: nvme_ctrlr
00:07:46.065    Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-11-20 04:57:59.977431] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.065  passed
00:07:46.065    Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-11-20 04:57:59.979271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.065  passed
00:07:46.065    Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-11-20 04:57:59.980671] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.065  passed
00:07:46.065    Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-11-20 04:57:59.981997] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  passed
00:07:46.066    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-11-20 04:57:59.983342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  [2024-11-20 04:57:59.984588] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22[2024-11-20 04:57:59.985926] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22[2024-11-20 04:57:59.987253] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22passed
00:07:46.066    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-11-20 04:57:59.989930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  [2024-11-20 04:57:59.992260] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:46.066  [2024-11-20 04:57:59.993494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:46.066  passed
00:07:46.066    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-11-20 04:57:59.996134] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  [2024-11-20 04:57:59.997486] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:46.066  [2024-11-20 04:57:59.999843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:07:46.066  passed
00:07:46.066    Test: test_nvme_ctrlr_init_delay ...[2024-11-20 04:58:00.002213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  passed
00:07:46.066    Test: test_alloc_io_qpair_rr_1 ...[2024-11-20 04:58:00.003426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  [2024-11-20 04:58:00.003612] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:07:46.066  [2024-11-20 04:58:00.003791] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:46.066  passed
00:07:46.066    Test: test_ctrlr_get_default_ctrlr_opts ...[2024-11-20 04:58:00.003869] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:46.066  [2024-11-20 04:58:00.003916] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:07:46.066  passed
00:07:46.066    Test: test_ctrlr_get_default_io_qpair_opts ...passed
00:07:46.066    Test: test_alloc_io_qpair_wrr_1 ...passed
00:07:46.066    Test: test_alloc_io_qpair_wrr_2 ...[2024-11-20 04:58:00.004019] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  [2024-11-20 04:58:00.004179] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  [2024-11-20 04:58:00.004291] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:07:46.066  passed
00:07:46.066    Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-11-20 04:58:00.004551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5051:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_update_firmware invalid size!
00:07:46.066  [2024-11-20 04:58:00.004695] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:07:46.066  [2024-11-20 04:58:00.004812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5128:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] nvme_ctrlr_cmd_fw_commit failed!
00:07:46.066  [2024-11-20 04:58:00.004966] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:07:46.066  passed
00:07:46.066    Test: test_nvme_ctrlr_fail ...[2024-11-20 04:58:00.005104] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [, 0] in failed state.
00:07:46.066  passed
00:07:46.066    Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed
00:07:46.066    Test: test_nvme_ctrlr_set_supported_features ...passed
00:07:46.066    Test: test_nvme_ctrlr_set_host_feature ...[2024-11-20 04:58:00.005331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.066  passed
00:07:46.066    Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed
00:07:46.066    Test: test_nvme_ctrlr_test_active_ns ...[2024-11-20 04:58:00.006741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.635  passed
00:07:46.636    Test: test_nvme_ctrlr_test_active_ns_error_case ...passed
00:07:46.636    Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed
00:07:46.636    Test: test_spdk_nvme_ctrlr_set_trid ...passed
00:07:46.636    Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-11-20 04:58:00.316755] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_init_set_num_queues ...[2024-11-20 04:58:00.323972] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-11-20 04:58:00.325227] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  [2024-11-20 04:58:00.325348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3039:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [, 0] Keep alive timeout Get Feature failed: SC 6 SCT 0
00:07:46.636  passed
00:07:46.636    Test: test_alloc_io_qpair_fail ...[2024-11-20 04:58:00.326579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_add_remove_process ...passed
00:07:46.636    Test: test_nvme_ctrlr_set_arbitration_feature ...[2024-11-20 04:58:00.326690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 505:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [, 0] nvme_transport_ctrlr_connect_io_qpair() failed
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_set_state ...passed
00:07:46.636    Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-11-20 04:58:00.326854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1554:_nvme_ctrlr_set_state: *ERROR*: [, 0] Specified timeout would cause integer overflow. Defaulting to no timeout.
00:07:46.636  [2024-11-20 04:58:00.326905] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-11-20 04:58:00.344934] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_ns_mgmt ...[2024-11-20 04:58:00.377751] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_reset ...[2024-11-20 04:58:00.379229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_aer_callback ...[2024-11-20 04:58:00.379753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_ns_attr_changed ...[2024-11-20 04:58:00.381300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed
00:07:46.636    Test: test_nvme_ctrlr_set_supported_log_pages ...passed
00:07:46.636    Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-11-20 04:58:00.382907] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_parse_ana_log_page ...passed
00:07:46.636    Test: test_nvme_ctrlr_ana_resize ...[2024-11-20 04:58:00.384304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_get_memory_domains ...passed
00:07:46.636    Test: test_nvme_transport_ctrlr_ready ...[2024-11-20 04:58:00.385965] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4194:nvme_ctrlr_process_init: *ERROR*: [, 0] Transport controller ready step failed: rc -1
00:07:46.636  passed
00:07:46.636    Test: test_nvme_ctrlr_disable ...[2024-11-20 04:58:00.386034] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4246:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr operation failed with error: -1, ctrlr state: 53 (error)
00:07:46.636  [2024-11-20 04:58:00.386095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4314:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:07:46.636  passed
00:07:46.636    Test: test_nvme_numa_id ...passed
00:07:46.636  
00:07:46.636  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.636                suites      1      1    n/a      0        0
00:07:46.636                 tests     45     45     45      0        0
00:07:46.636               asserts  10448  10448  10448      0      n/a
00:07:46.636  
00:07:46.636  Elapsed time =    0.368 seconds
00:07:46.636   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut
00:07:46.636  
00:07:46.636  
00:07:46.636       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.636       http://cunit.sourceforge.net/
00:07:46.636  
00:07:46.636  
00:07:46.636  Suite: nvme_ctrlr_cmd
00:07:46.636    Test: test_get_log_pages ...passed
00:07:46.636    Test: test_set_feature_cmd ...passed
00:07:46.636    Test: test_set_feature_ns_cmd ...passed
00:07:46.636    Test: test_get_feature_cmd ...passed
00:07:46.636    Test: test_get_feature_ns_cmd ...passed
00:07:46.636    Test: test_abort_cmd ...passed
00:07:46.636    Test: test_set_host_id_cmds ...[2024-11-20 04:58:00.422667] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024
00:07:46.636  passed
00:07:46.636    Test: test_io_cmd_raw_no_payload_build ...passed
00:07:46.636    Test: test_io_raw_cmd ...passed
00:07:46.636    Test: test_io_raw_cmd_with_md ...passed
00:07:46.636    Test: test_namespace_attach ...passed
00:07:46.636    Test: test_namespace_detach ...passed
00:07:46.636    Test: test_namespace_create ...passed
00:07:46.636    Test: test_namespace_delete ...passed
00:07:46.636    Test: test_doorbell_buffer_config ...passed
00:07:46.636    Test: test_format_nvme ...passed
00:07:46.636    Test: test_fw_commit ...passed
00:07:46.636    Test: test_fw_image_download ...passed
00:07:46.636    Test: test_sanitize ...passed
00:07:46.636    Test: test_directive ...passed
00:07:46.636    Test: test_nvme_request_add_abort ...passed
00:07:46.636    Test: test_spdk_nvme_ctrlr_cmd_abort ...passed
00:07:46.636    Test: test_nvme_ctrlr_cmd_identify ...passed
00:07:46.636    Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed
00:07:46.636  
00:07:46.636  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.636                suites      1      1    n/a      0        0
00:07:46.636                 tests     24     24     24      0        0
00:07:46.636               asserts    198    198    198      0      n/a
00:07:46.636  
00:07:46.636  Elapsed time =    0.000 seconds
00:07:46.636   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut
00:07:46.636  
00:07:46.636  
00:07:46.636       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.636       http://cunit.sourceforge.net/
00:07:46.636  
00:07:46.636  
00:07:46.636  Suite: nvme_ctrlr_cmd
00:07:46.636    Test: test_geometry_cmd ...passed
00:07:46.636    Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed
00:07:46.636  
00:07:46.636  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.636                suites      1      1    n/a      0        0
00:07:46.636                 tests      2      2      2      0        0
00:07:46.636               asserts      7      7      7      0      n/a
00:07:46.636  
00:07:46.636  Elapsed time =    0.000 seconds
00:07:46.636   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut
00:07:46.636  
00:07:46.636  
00:07:46.636       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.636       http://cunit.sourceforge.net/
00:07:46.636  
00:07:46.636  
00:07:46.636  Suite: nvme
00:07:46.636    Test: test_nvme_ns_construct ...passed
00:07:46.636    Test: test_nvme_ns_uuid ...passed
00:07:46.636    Test: test_nvme_ns_csi ...passed
00:07:46.636    Test: test_nvme_ns_data ...passed
00:07:46.636    Test: test_nvme_ns_set_identify_data ...passed
00:07:46.636    Test: test_spdk_nvme_ns_get_values ...passed
00:07:46.636    Test: test_spdk_nvme_ns_is_active ...passed
00:07:46.636    Test: spdk_nvme_ns_supports ...passed
00:07:46.636    Test: test_nvme_ns_has_supported_iocs_specific_data ...passed
00:07:46.636    Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed
00:07:46.636    Test: test_nvme_ctrlr_identify_id_desc ...passed
00:07:46.636    Test: test_nvme_ns_find_id_desc ...passed
00:07:46.636  
00:07:46.636  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.636                suites      1      1    n/a      0        0
00:07:46.636                 tests     12     12     12      0        0
00:07:46.636               asserts     95     95     95      0      n/a
00:07:46.636  
00:07:46.636  Elapsed time =    0.001 seconds
00:07:46.636   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut
00:07:46.636  
00:07:46.636  
00:07:46.636       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.636       http://cunit.sourceforge.net/
00:07:46.636  
00:07:46.636  
00:07:46.636  Suite: nvme_ns_cmd
00:07:46.636    Test: split_test ...passed
00:07:46.637    Test: split_test2 ...passed
00:07:46.637    Test: split_test3 ...passed
00:07:46.637    Test: split_test4 ...passed
00:07:46.637    Test: test_nvme_ns_cmd_flush ...passed
00:07:46.637    Test: test_nvme_ns_cmd_dataset_management ...passed
00:07:46.637    Test: test_nvme_ns_cmd_copy ...passed
00:07:46.637    Test: test_io_flags ...[2024-11-20 04:58:00.516696] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc
00:07:46.637  passed
00:07:46.637    Test: test_nvme_ns_cmd_write_zeroes ...passed
00:07:46.637    Test: test_nvme_ns_cmd_write_uncorrectable ...passed
00:07:46.637    Test: test_nvme_ns_cmd_reservation_register ...passed
00:07:46.637    Test: test_nvme_ns_cmd_reservation_release ...passed
00:07:46.637    Test: test_nvme_ns_cmd_reservation_acquire ...passed
00:07:46.637    Test: test_nvme_ns_cmd_reservation_report ...passed
00:07:46.637    Test: test_cmd_child_request ...passed
00:07:46.637    Test: test_nvme_ns_cmd_readv ...passed
00:07:46.637    Test: test_nvme_ns_cmd_readv_sgl ...[2024-11-20 04:58:00.517838] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 390:_nvme_ns_cmd_split_request_sgl: *ERROR*: Unable to send I/O. Would require more than the supported number of SGL Elements.
00:07:46.637  passed
00:07:46.637    Test: test_nvme_ns_cmd_read_with_md ...passed
00:07:46.637    Test: test_nvme_ns_cmd_writev ...[2024-11-20 04:58:00.518175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 291:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512
00:07:46.637  passed
00:07:46.637    Test: test_nvme_ns_cmd_write_with_md ...passed
00:07:46.637    Test: test_nvme_ns_cmd_zone_append_with_md ...passed
00:07:46.637    Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed
00:07:46.637    Test: test_nvme_ns_cmd_comparev ...passed
00:07:46.637    Test: test_nvme_ns_cmd_compare_and_write ...passed
00:07:46.637    Test: test_nvme_ns_cmd_compare_with_md ...passed
00:07:46.637    Test: test_nvme_ns_cmd_comparev_with_md ...passed
00:07:46.637    Test: test_nvme_ns_cmd_setup_request ...passed
00:07:46.637    Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed
00:07:46.637    Test: test_spdk_nvme_ns_cmd_writev_ext ...passed
00:07:46.637    Test: test_spdk_nvme_ns_cmd_readv_ext ...[2024-11-20 04:58:00.519924] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:46.637  passed
00:07:46.637    Test: test_nvme_ns_cmd_verify ...passed
00:07:46.637    Test: test_nvme_ns_cmd_io_mgmt_send ...[2024-11-20 04:58:00.520050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:07:46.637  passed
00:07:46.637    Test: test_nvme_ns_cmd_io_mgmt_recv ...passed
00:07:46.637  
00:07:46.637  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.637                suites      1      1    n/a      0        0
00:07:46.637                 tests     33     33     33      0        0
00:07:46.637               asserts    569    569    569      0      n/a
00:07:46.637  
00:07:46.637  Elapsed time =    0.005 seconds
00:07:46.637   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut
00:07:46.637  
00:07:46.637  
00:07:46.637       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.637       http://cunit.sourceforge.net/
00:07:46.637  
00:07:46.637  
00:07:46.637  Suite: nvme_ns_cmd
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_read ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_write ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed
00:07:46.637    Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed
00:07:46.637  
00:07:46.637  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.637                suites      1      1    n/a      0        0
00:07:46.637                 tests     12     12     12      0        0
00:07:46.637               asserts    123    123    123      0      n/a
00:07:46.637  
00:07:46.637  Elapsed time =    0.001 seconds
00:07:46.637   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
00:07:46.637  
00:07:46.637  
00:07:46.637       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.637       http://cunit.sourceforge.net/
00:07:46.637  
00:07:46.637  
00:07:46.637  Suite: nvme_qpair
00:07:46.637    Test: test3 ...passed
00:07:46.637    Test: test_ctrlr_failed ...passed
00:07:46.637    Test: struct_packing ...passed
00:07:46.896    Test: test_nvme_qpair_process_completions ...[2024-11-20 04:58:00.590776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:46.896  [2024-11-20 04:58:00.591089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:46.896  [2024-11-20 04:58:00.591158] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 0
00:07:46.896  passed
00:07:46.896    Test: test_nvme_completion_is_retry ...passed
00:07:46.896    Test: test_get_status_string ...passed
00:07:46.896    Test: test_nvme_qpair_add_cmd_error_injection ...[2024-11-20 04:58:00.591257] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (No such device or address) on qpair id 1
00:07:46.896  passed
00:07:46.896    Test: test_nvme_qpair_submit_request ...passed
00:07:46.896    Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed
00:07:46.896    Test: test_nvme_qpair_manual_complete_request ...passed
00:07:46.897    Test: test_nvme_qpair_init_deinit ...[2024-11-20 04:58:00.591726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:07:46.897  passed
00:07:46.897    Test: test_nvme_get_sgl_print_info ...passed
00:07:46.897  
00:07:46.897  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.897                suites      1      1    n/a      0        0
00:07:46.897                 tests     12     12     12      0        0
00:07:46.897               asserts    154    154    154      0      n/a
00:07:46.897  
00:07:46.897  Elapsed time =    0.001 seconds
00:07:46.897   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut
00:07:46.897  
00:07:46.897  
00:07:46.897       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.897       http://cunit.sourceforge.net/
00:07:46.897  
00:07:46.897  
00:07:46.897  Suite: nvme_pcie
00:07:46.897    Test: test_prp_list_append ...[2024-11-20 04:58:00.625265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:46.897  [2024-11-20 04:58:00.625705] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1271:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800)
00:07:46.897  [2024-11-20 04:58:00.625765] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1261:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed
00:07:46.897  [2024-11-20 04:58:00.626016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:46.897  passed
00:07:46.897    Test: test_nvme_pcie_hotplug_monitor ...[2024-11-20 04:58:00.626121] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:07:46.897  passed
00:07:46.897    Test: test_shadow_doorbell_update ...passed
00:07:46.897    Test: test_build_contig_hw_sgl_request ...passed
00:07:46.897    Test: test_nvme_pcie_qpair_build_metadata ...passed
00:07:46.897    Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed
00:07:46.897    Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed
00:07:46.897    Test: test_nvme_pcie_qpair_build_contig_request ...passed
00:07:46.897    Test: test_nvme_pcie_ctrlr_regs_get_set ...passed
00:07:46.897    Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed
00:07:46.897    Test: test_nvme_pcie_ctrlr_map_io_cmb ...[2024-11-20 04:58:00.626302] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:07:46.897  [2024-11-20 04:58:00.626409] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues.
00:07:46.897  passed
00:07:46.897    Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...[2024-11-20 04:58:00.626519] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value
00:07:46.897  passed
00:07:46.897    Test: test_nvme_pcie_ctrlr_config_pmr ...passed
00:07:46.897    Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-11-20 04:58:00.626614] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled
00:07:46.897  [2024-11-20 04:58:00.626678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller
00:07:46.897  passed
00:07:46.897  
00:07:46.897  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.897                suites      1      1    n/a      0        0
00:07:46.897                 tests     14     14     14      0        0
00:07:46.897               asserts    235    235    235      0      n/a
00:07:46.897  
00:07:46.897  Elapsed time =    0.002 seconds
00:07:46.897   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut
00:07:46.897  
00:07:46.897  
00:07:46.897       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.897       http://cunit.sourceforge.net/
00:07:46.897  
00:07:46.897  
00:07:46.897  Suite: nvme_ns_cmd
00:07:46.897    Test: nvme_poll_group_create_test ...passed
00:07:46.897    Test: nvme_poll_group_add_remove_test ...[2024-11-20 04:58:00.657953] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_poll_group.c: 188:spdk_nvme_poll_group_add: *ERROR*: Queue pair without interrupts cannot be added to poll group
00:07:46.897  passed
00:07:46.897    Test: nvme_poll_group_process_completions ...passed
00:07:46.897    Test: nvme_poll_group_destroy_test ...passed
00:07:46.897    Test: nvme_poll_group_get_free_stats ...passed
00:07:46.897  
00:07:46.897  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.897                suites      1      1    n/a      0        0
00:07:46.897                 tests      5      5      5      0        0
00:07:46.897               asserts    103    103    103      0      n/a
00:07:46.897  
00:07:46.897  Elapsed time =    0.002 seconds
00:07:46.897   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut
00:07:46.897  
00:07:46.897  
00:07:46.897       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.897       http://cunit.sourceforge.net/
00:07:46.897  
00:07:46.897  
00:07:46.897  Suite: nvme_quirks
00:07:46.897    Test: test_nvme_quirks_striping ...passed
00:07:46.897  
00:07:46.897  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:46.897                suites      1      1    n/a      0        0
00:07:46.897                 tests      1      1      1      0        0
00:07:46.897               asserts      5      5      5      0      n/a
00:07:46.897  
00:07:46.897  Elapsed time =    0.000 seconds
00:07:46.897   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut
00:07:46.897  
00:07:46.897  
00:07:46.897       CUnit - A unit testing framework for C - Version 2.1-3
00:07:46.897       http://cunit.sourceforge.net/
00:07:46.897  
00:07:46.897  
00:07:46.897  Suite: nvme_tcp
00:07:46.897    Test: test_nvme_tcp_pdu_set_data_buf ...passed
00:07:46.897    Test: test_nvme_tcp_build_iovs ...passed
00:07:46.897    Test: test_nvme_tcp_build_sgl_request ...[2024-11-20 04:58:00.720330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x7ffff1485cb0, and the iovcnt=16, remaining_size=28672
00:07:46.897  passed
00:07:46.897    Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed
00:07:46.897    Test: test_nvme_tcp_build_iovs_with_md ...passed
00:07:46.897    Test: test_nvme_tcp_req_complete_safe ...passed
00:07:46.897    Test: test_nvme_tcp_req_get ...passed
00:07:46.897    Test: test_nvme_tcp_req_init ...passed
00:07:46.897    Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed
00:07:46.897    Test: test_nvme_tcp_qpair_write_pdu ...passed
00:07:46.897    Test: test_nvme_tcp_qpair_set_recv_state ...[2024-11-20 04:58:00.722463] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff14879f0 is same with the state(7) to be set
00:07:46.897  passed
00:07:46.897    Test: test_nvme_tcp_alloc_reqs ...passed
00:07:46.897    Test: test_nvme_tcp_qpair_send_h2c_term_req ...[2024-11-20 04:58:00.723324] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1486ba0 is same with the state(6) to be set
00:07:46.897  passed
00:07:46.897    Test: test_nvme_tcp_pdu_ch_handle ...[2024-11-20 04:58:00.723734] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1133:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x7ffff1487740
00:07:46.897  [2024-11-20 04:58:00.723959] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1192:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0
00:07:46.897  [2024-11-20 04:58:00.724170] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.724389] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1143:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated
00:07:46.897  [2024-11-20 04:58:00.724604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.724777] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:07:46.897  [2024-11-20 04:58:00.724936] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.725107] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.725338] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.725526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.725702] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.725877] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1487060 is same with the state(6) to be set
00:07:46.897  passed
00:07:46.897    Test: test_nvme_tcp_qpair_connect_sock ...[2024-11-20 04:58:00.726359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3
00:07:46.897  [2024-11-20 04:58:00.726550] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:46.897  [2024-11-20 04:58:00.726930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:07:46.897  passed
00:07:46.897    Test: test_nvme_tcp_qpair_icreq_send ...passed
00:07:46.897    Test: test_nvme_tcp_c2h_payload_handle ...[2024-11-20 04:58:00.727636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffff1487280): PDU Sequence Error
00:07:46.897  passed
00:07:46.897    Test: test_nvme_tcp_icresp_handle ...[2024-11-20 04:58:00.728062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1476:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1
00:07:46.897  [2024-11-20 04:58:00.728249] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1483:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048
00:07:46.897  [2024-11-20 04:58:00.728438] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1486bb0 is same with the state(6) to be set
00:07:46.897  [2024-11-20 04:58:00.728607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1492:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64
00:07:46.897  [2024-11-20 04:58:00.728772] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1486bb0 is same with the state(6) to be set
00:07:46.898  [2024-11-20 04:58:00.728962] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff1486bb0 is same with the state(0) to be set
00:07:46.898  passed
00:07:46.898    Test: test_nvme_tcp_pdu_payload_handle ...[2024-11-20 04:58:00.729326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1300:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x7ffff1487740): PDU Sequence Error
00:07:46.898  passed
00:07:46.898    Test: test_nvme_tcp_capsule_resp_hdr_handle ...[2024-11-20 04:58:00.729738] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1553:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x7ffff1485e70
00:07:46.898  passed
00:07:46.898    Test: test_nvme_tcp_ctrlr_connect_qpair ...passed
00:07:46.898    Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-11-20 04:58:00.730445] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 357:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x7ffff14854f0, errno=0, rc=0
00:07:46.898  [2024-11-20 04:58:00.730630] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff14854f0 is same with the state(6) to be set
00:07:46.898  [2024-11-20 04:58:00.730834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff14854f0 is same with the state(6) to be set
00:07:46.898  [2024-11-20 04:58:00.731044] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffff14854f0 (0): Success
00:07:46.898  [2024-11-20 04:58:00.731229] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ffff14854f0 (0): Success
00:07:46.898  passed
00:07:46.898    Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-11-20 04:58:00.850402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:46.898  [2024-11-20 04:58:00.850803] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:46.898  passed
00:07:46.898    Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed
00:07:46.898    Test: test_nvme_tcp_poll_group_get_stats ...[2024-11-20 04:58:00.851585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:46.898  [2024-11-20 04:58:00.851747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:47.157  passed
00:07:47.157    Test: test_nvme_tcp_ctrlr_construct ...[2024-11-20 04:58:00.852239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2436:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:47.157  [2024-11-20 04:58:00.852415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:47.157  [2024-11-20 04:58:00.852650] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254
00:07:47.157  [2024-11-20 04:58:00.852820] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:47.157  [2024-11-20 04:58:00.853051] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x615000007d80 with addr=192.168.1.78, port=23
00:07:47.157  [2024-11-20 04:58:00.853250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:07:47.157  passed
00:07:47.157    Test: test_nvme_tcp_qpair_submit_request ...[2024-11-20 04:58:00.853716] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 790:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x614000000c40, and the iovcnt=1, remaining_size=1024
00:07:47.157  [2024-11-20 04:58:00.853889] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 977:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed
00:07:47.157  passed
00:07:47.157  
00:07:47.157  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.157                suites      1      1    n/a      0        0
00:07:47.157                 tests     27     27     27      0        0
00:07:47.157               asserts    624    624    624      0      n/a
00:07:47.157  
00:07:47.157  Elapsed time =    0.126 seconds
00:07:47.157   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut
00:07:47.157  
00:07:47.157  
00:07:47.157       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.157       http://cunit.sourceforge.net/
00:07:47.157  
00:07:47.157  
00:07:47.157  Suite: nvme_transport
00:07:47.157    Test: test_nvme_get_transport ...passed
00:07:47.157    Test: test_nvme_transport_poll_group_connect_qpair ...passed
00:07:47.157    Test: test_nvme_transport_poll_group_disconnect_qpair ...passed
00:07:47.157    Test: test_nvme_transport_poll_group_add_remove ...passed
00:07:47.157    Test: test_ctrlr_get_memory_domains ...passed
00:07:47.157  
00:07:47.157  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.157                suites      1      1    n/a      0        0
00:07:47.157                 tests      5      5      5      0        0
00:07:47.157               asserts     28     28     28      0      n/a
00:07:47.157  
00:07:47.157  Elapsed time =    0.000 seconds
00:07:47.157   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut
00:07:47.157  
00:07:47.157  
00:07:47.157       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.157       http://cunit.sourceforge.net/
00:07:47.157  
00:07:47.157  
00:07:47.157  Suite: nvme_io_msg
00:07:47.157    Test: test_nvme_io_msg_send ...passed
00:07:47.157    Test: test_nvme_io_msg_process ...passed
00:07:47.157    Test: test_nvme_io_msg_ctrlr_register_unregister ...passed
00:07:47.157  
00:07:47.157  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.157                suites      1      1    n/a      0        0
00:07:47.157                 tests      3      3      3      0        0
00:07:47.157               asserts     56     56     56      0      n/a
00:07:47.157  
00:07:47.157  Elapsed time =    0.000 seconds
00:07:47.157   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut
00:07:47.157  
00:07:47.157  
00:07:47.157       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.157       http://cunit.sourceforge.net/
00:07:47.157  
00:07:47.157  
00:07:47.157  Suite: nvme_pcie_common
00:07:47.157    Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-11-20 04:58:00.960460] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 112:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range!
00:07:47.157  passed
00:07:47.157    Test: test_nvme_pcie_qpair_construct_destroy ...passed
00:07:47.157    Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed
00:07:47.157    Test: test_nvme_pcie_ctrlr_connect_qpair ...[2024-11-20 04:58:00.961238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 541:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed!
00:07:47.157  [2024-11-20 04:58:00.961406] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 494:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq!
00:07:47.157  [2024-11-20 04:58:00.961457] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 588:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq
00:07:47.157  passed
00:07:47.157    Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...passed
00:07:47.157    Test: test_nvme_pcie_poll_group_get_stats ...[2024-11-20 04:58:00.961930] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:47.157  [2024-11-20 04:58:00.961993] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:47.157  passed
00:07:47.157  
00:07:47.157  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.157                suites      1      1    n/a      0        0
00:07:47.157                 tests      6      6      6      0        0
00:07:47.157               asserts    148    148    148      0      n/a
00:07:47.157  
00:07:47.157  Elapsed time =    0.002 seconds
00:07:47.157   04:58:00 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut
00:07:47.157  
00:07:47.157  
00:07:47.157       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.157       http://cunit.sourceforge.net/
00:07:47.157  
00:07:47.157  
00:07:47.157  Suite: nvme_fabric
00:07:47.157    Test: test_nvme_fabric_prop_set_cmd ...passed
00:07:47.157    Test: test_nvme_fabric_prop_get_cmd ...passed
00:07:47.157    Test: test_nvme_fabric_get_discovery_log_page ...passed
00:07:47.157    Test: test_nvme_fabric_discover_probe ...passed
00:07:47.157    Test: test_nvme_fabric_qpair_connect ...[2024-11-20 04:58:00.994973] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -125, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1
00:07:47.157  passed
00:07:47.157  
00:07:47.157  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.158                suites      1      1    n/a      0        0
00:07:47.158                 tests      5      5      5      0        0
00:07:47.158               asserts     60     60     60      0      n/a
00:07:47.158  
00:07:47.158  Elapsed time =    0.001 seconds
00:07:47.158   04:58:01 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut
00:07:47.158  
00:07:47.158  
00:07:47.158       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.158       http://cunit.sourceforge.net/
00:07:47.158  
00:07:47.158  
00:07:47.158  Suite: nvme_opal
00:07:47.158    Test: test_opal_nvme_security_recv_send_done ...passed
00:07:47.158    Test: test_opal_add_short_atom_header ...passed
00:07:47.158  
00:07:47.158  [2024-11-20 04:58:01.025780] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer.
00:07:47.158  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:47.158                suites      1      1    n/a      0        0
00:07:47.158                 tests      2      2      2      0        0
00:07:47.158               asserts     22     22     22      0      n/a
00:07:47.158  
00:07:47.158  Elapsed time =    0.000 seconds
00:07:47.158  
00:07:47.158  real	0m1.228s
00:07:47.158  user	0m0.666s
00:07:47.158  sys	0m0.407s
00:07:47.158   04:58:01 unittest.unittest_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:47.158   04:58:01 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:47.158  ************************************
00:07:47.158  END TEST unittest_nvme
00:07:47.158  ************************************
00:07:47.158   04:58:01 unittest -- unit/unittest.sh@231 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:47.158   04:58:01 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:47.158   04:58:01 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:47.158   04:58:01 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:47.158  ************************************
00:07:47.158  START TEST unittest_log
00:07:47.158  ************************************
00:07:47.158   04:58:01 unittest.unittest_log -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:07:47.158  
00:07:47.158  
00:07:47.158       CUnit - A unit testing framework for C - Version 2.1-3
00:07:47.158       http://cunit.sourceforge.net/
00:07:47.158  
00:07:47.158  
00:07:47.158  Suite: log
00:07:47.158    Test: log_test ...[2024-11-20 04:58:01.109911] log_ut.c:  56:log_test: *WARNING*: log warning unit test
00:07:47.158  [2024-11-20 04:58:01.110248] log_ut.c:  57:log_test: *DEBUG*: log test
00:07:47.158  log dump test:
00:07:47.158  00000000  6c 6f 67 20 64 75 6d 70                            log dump
00:07:47.158  passed
00:07:47.158    Test: deprecation ...spdk dump test:
00:07:47.158  00000000  73 70 64 6b 20 64 75 6d  70                        spdk dump
00:07:47.158  spdk dump test:
00:07:47.158  00000000  73 70 64 6b 20 64 75 6d  70 20 31 36 20 6d 6f 72  spdk dump 16 mor
00:07:47.158  00000010  65 20 63 68 61 72 73                              e chars
00:07:48.536  passed
00:07:48.536    Test: log_ext_test ...passed
00:07:48.536  
00:07:48.536  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.536                suites      1      1    n/a      0        0
00:07:48.536                 tests      3      3      3      0        0
00:07:48.536               asserts     77     77     77      0      n/a
00:07:48.536  
00:07:48.536  Elapsed time =    0.001 seconds
00:07:48.536  
00:07:48.536  real	0m1.032s
00:07:48.536  user	0m0.024s
00:07:48.536  sys	0m0.008s
00:07:48.536   04:58:02 unittest.unittest_log -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.536   04:58:02 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x
00:07:48.536  ************************************
00:07:48.536  END TEST unittest_log
00:07:48.536  ************************************
00:07:48.536   04:58:02 unittest -- unit/unittest.sh@232 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:48.536   04:58:02 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:48.536   04:58:02 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.536   04:58:02 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:48.536  ************************************
00:07:48.536  START TEST unittest_lvol
00:07:48.536  ************************************
00:07:48.536   04:58:02 unittest.unittest_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:07:48.536  
00:07:48.536  
00:07:48.536       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.536       http://cunit.sourceforge.net/
00:07:48.536  
00:07:48.536  
00:07:48.536  Suite: lvol
00:07:48.536    Test: lvs_init_unload_success ...[2024-11-20 04:58:02.202447] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store
00:07:48.536  passed
00:07:48.536    Test: lvs_init_destroy_success ...[2024-11-20 04:58:02.203038] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store
00:07:48.536  passed
00:07:48.536    Test: lvs_init_opts_success ...passed
00:07:48.536    Test: lvs_unload_lvs_is_null_fail ...passed
00:07:48.536    Test: lvs_names ...[2024-11-20 04:58:02.203346] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL
00:07:48.536  [2024-11-20 04:58:02.203454] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified.
00:07:48.536  [2024-11-20 04:58:02.203544] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator.
00:07:48.536  [2024-11-20 04:58:02.203798] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists
00:07:48.536  passed
00:07:48.536    Test: lvol_create_destroy_success ...passed
00:07:48.536    Test: lvol_create_fail ...[2024-11-20 04:58:02.204540] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist
00:07:48.536  [2024-11-20 04:58:02.204729] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist
00:07:48.536  passed
00:07:48.536    Test: lvol_destroy_fail ...[2024-11-20 04:58:02.205142] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal
00:07:48.536  passed
00:07:48.536    Test: lvol_close ...[2024-11-20 04:58:02.205453] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist
00:07:48.536  [2024-11-20 04:58:02.205544] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol
00:07:48.536  passed
00:07:48.536    Test: lvol_resize ...passed
00:07:48.536    Test: lvol_set_read_only ...passed
00:07:48.536    Test: test_lvs_load ...[2024-11-20 04:58:02.206417] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value
00:07:48.536  [2024-11-20 04:58:02.206491] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options
00:07:48.536  passed
00:07:48.536    Test: lvols_load ...[2024-11-20 04:58:02.206756] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:48.536  [2024-11-20 04:58:02.206915] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:07:48.536  passed
00:07:48.536    Test: lvol_open ...passed
00:07:48.536    Test: lvol_snapshot ...passed
00:07:48.536    Test: lvol_snapshot_fail ...[2024-11-20 04:58:02.207830] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists
00:07:48.536  passed
00:07:48.536    Test: lvol_clone ...passed
00:07:48.536    Test: lvol_clone_fail ...[2024-11-20 04:58:02.208443] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists
00:07:48.536  passed
00:07:48.536    Test: lvol_iter_clones ...passed
00:07:48.536    Test: lvol_refcnt ...[2024-11-20 04:58:02.209048] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 53a2f283-0adb-43f5-b393-7808538e44bb because it is still open
00:07:48.536  passed
00:07:48.536    Test: lvol_names ...[2024-11-20 04:58:02.209292] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:48.536  [2024-11-20 04:58:02.209405] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:48.536  [2024-11-20 04:58:02.209673] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created
00:07:48.536  passed
00:07:48.536    Test: lvol_create_thin_provisioned ...passed
00:07:48.536    Test: lvol_rename ...[2024-11-20 04:58:02.210189] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:48.536  [2024-11-20 04:58:02.210316] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs
00:07:48.536  passed
00:07:48.536    Test: lvs_rename ...[2024-11-20 04:58:02.210606] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed
00:07:48.536  passed
00:07:48.536    Test: lvol_inflate ...[2024-11-20 04:58:02.210877] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:48.536  passed
00:07:48.536    Test: lvol_decouple_parent ...[2024-11-20 04:58:02.211191] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:07:48.536  passed
00:07:48.536    Test: lvol_get_xattr ...passed
00:07:48.536    Test: lvol_esnap_reload ...passed
00:07:48.536    Test: lvol_esnap_create_bad_args ...[2024-11-20 04:58:02.211733] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist
00:07:48.536  [2024-11-20 04:58:02.211802] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:07:48.536  [2024-11-20 04:58:02.211900] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1258:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576
00:07:48.537  [2024-11-20 04:58:02.212053] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:07:48.537  passed
00:07:48.537    Test: lvol_esnap_create_delete ...[2024-11-20 04:58:02.212216] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists
00:07:48.537  passed
00:07:48.537    Test: lvol_esnap_load_esnaps ...passed
00:07:48.537    Test: lvol_esnap_missing ...[2024-11-20 04:58:02.212551] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1832:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context
00:07:48.537  [2024-11-20 04:58:02.212729] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:48.537  [2024-11-20 04:58:02.212783] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:07:48.537  passed
00:07:48.537    Test: lvol_esnap_hotplug ...
00:07:48.537  	lvol_esnap_hotplug scenario 0: PASS - one missing, happy path
00:07:48.537  	lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set
00:07:48.537  	lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM
00:07:48.537  [2024-11-20 04:58:02.213497] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol f42f50fa-851d-484c-9410-803400b73307: failed to create esnap bs_dev: error -12
00:07:48.537  	lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path
00:07:48.537  [2024-11-20 04:58:02.213731] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol d780ebaf-80e7-450d-bf0f-7b5258737410: failed to create esnap bs_dev: error -12
00:07:48.537  	lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM
00:07:48.537  [2024-11-20 04:58:02.213873] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2062:lvs_esnap_degraded_hotplug: *ERROR*: lvol 330383fb-4f79-4446-a970-aa0031b2fedd: failed to create esnap bs_dev: error -12
00:07:48.537  	lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM
00:07:48.537  	lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path
00:07:48.537  	lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing
00:07:48.537  	lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path
00:07:48.537  	lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing
00:07:48.537  	lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing
00:07:48.537  	lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing
00:07:48.537  	lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing
00:07:48.537  passed
00:07:48.537    Test: lvol_get_by ...passed
00:07:48.537    Test: lvol_shallow_copy ...[2024-11-20 04:58:02.215085] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:07:48.537  [2024-11-20 04:58:02.215156] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol e33ec138-d06f-46a3-9afb-a69c527f4368 shallow copy, ext_dev must not be NULL
00:07:48.537  passed
00:07:48.537    Test: lvol_set_parent ...[2024-11-20 04:58:02.215498] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL
00:07:48.537  [2024-11-20 04:58:02.215563] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL
00:07:48.537  passed
00:07:48.537    Test: lvol_set_external_parent ...[2024-11-20 04:58:02.215833] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL
00:07:48.537  [2024-11-20 04:58:02.215903] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL
00:07:48.537  [2024-11-20 04:58:02.215997] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID
00:07:48.537  passed
00:07:48.537  
00:07:48.537  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.537                suites      1      1    n/a      0        0
00:07:48.537                 tests     37     37     37      0        0
00:07:48.537               asserts   1505   1505   1505      0      n/a
00:07:48.537  
00:07:48.537  Elapsed time =    0.014 seconds
00:07:48.537  
00:07:48.537  real	0m0.052s
00:07:48.537  user	0m0.017s
00:07:48.537  sys	0m0.036s
00:07:48.537   04:58:02 unittest.unittest_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.537   04:58:02 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x
00:07:48.537  ************************************
00:07:48.537  END TEST unittest_lvol
00:07:48.537  ************************************
00:07:48.537   04:58:02 unittest -- unit/unittest.sh@233 -- # [[ y == y ]]
00:07:48.537   04:58:02 unittest -- unit/unittest.sh@234 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:48.537   04:58:02 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:48.537   04:58:02 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.537   04:58:02 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:48.537  ************************************
00:07:48.537  START TEST unittest_nvme_rdma
00:07:48.537  ************************************
00:07:48.537   04:58:02 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:07:48.537  
00:07:48.537  
00:07:48.537       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.537       http://cunit.sourceforge.net/
00:07:48.537  
00:07:48.537  
00:07:48.537  Suite: nvme_rdma
00:07:48.537    Test: test_nvme_rdma_build_sgl_request ...[2024-11-20 04:58:02.301726] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1390:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34
00:07:48.537  [2024-11-20 04:58:02.302096] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1577:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:48.537  passed
00:07:48.537    Test: test_nvme_rdma_build_sgl_inline_request ...passed
00:07:48.537    Test: test_nvme_rdma_build_contig_request ...[2024-11-20 04:58:02.302205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1633:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60)
00:07:48.537  [2024-11-20 04:58:02.302294] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1529:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:07:48.537  passed
00:07:48.537    Test: test_nvme_rdma_build_contig_inline_request ...passed
00:07:48.537    Test: test_nvme_rdma_create_reqs ...[2024-11-20 04:58:02.302411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 921:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs
00:07:48.537  passed
00:07:48.537    Test: test_nvme_rdma_create_rsps ...[2024-11-20 04:58:02.302722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 839:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls
00:07:48.537  passed
00:07:48.537    Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-11-20 04:58:02.302871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1765:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:07:48.537  passed
00:07:48.537    Test: test_nvme_rdma_poller_create ...passed
00:07:48.537    Test: test_nvme_rdma_qpair_process_cm_event ...passed
00:07:48.537    Test: test_nvme_rdma_ctrlr_construct ...[2024-11-20 04:58:02.302931] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1765:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:07:48.537  [2024-11-20 04:58:02.303112] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 447:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255]
00:07:48.537  passed
00:07:48.537    Test: test_nvme_rdma_req_put_and_get ...passed
00:07:48.537    Test: test_nvme_rdma_req_init ...passed
00:07:48.537    Test: test_nvme_rdma_validate_cm_event ...passed
00:07:48.537    Test: test_nvme_rdma_qpair_init ...passed
00:07:48.537    Test: test_nvme_rdma_qpair_submit_request ...[2024-11-20 04:58:02.303442] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0)
00:07:48.537  [2024-11-20 04:58:02.303504] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 538:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10)
00:07:48.537  passed
00:07:48.537    Test: test_rdma_ctrlr_get_memory_domains ...passed
00:07:48.537    Test: test_rdma_get_memory_translation ...passed
00:07:48.537    Test: test_get_rdma_qpair_from_wc ...passed
00:07:48.537    Test: test_nvme_rdma_ctrlr_get_max_sges ...passed
00:07:48.537    Test: test_nvme_rdma_poll_group_get_stats ...[2024-11-20 04:58:02.303644] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0
00:07:48.537  [2024-11-20 04:58:02.303701] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1390:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1
00:07:48.537  [2024-11-20 04:58:02.303809] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3262:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:48.537  [2024-11-20 04:58:02.303848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3262:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:07:48.537  passed
00:07:48.537    Test: test_nvme_rdma_qpair_set_poller ...[2024-11-20 04:58:02.304016] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2965:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:48.537  [2024-11-20 04:58:02.304063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3011:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef
00:07:48.537  [2024-11-20 04:58:02.304108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 644:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff44d64340 on poll group 0x60c000000040
00:07:48.537  [2024-11-20 04:58:02.304150] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2965:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 2.
00:07:48.537  [2024-11-20 04:58:02.304212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3011:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device (nil)
00:07:48.537  [2024-11-20 04:58:02.304248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 644:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x7fff44d64340 on poll group 0x60c000000040
00:07:48.537  [2024-11-20 04:58:02.304330] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 622:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:48.537  passed
00:07:48.537  
00:07:48.537  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.537                suites      1      1    n/a      0        0
00:07:48.537                 tests     21     21     21      0        0
00:07:48.537               asserts    395    395    395      0      n/a
00:07:48.537  
00:07:48.537  Elapsed time =    0.003 seconds
00:07:48.537  
00:07:48.537  real	0m0.036s
00:07:48.537  user	0m0.017s
00:07:48.537  sys	0m0.020s
00:07:48.537   04:58:02 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.537   04:58:02 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:48.537  ************************************
00:07:48.537  END TEST unittest_nvme_rdma
00:07:48.537  ************************************
00:07:48.538   04:58:02 unittest -- unit/unittest.sh@235 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:48.538   04:58:02 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:48.538   04:58:02 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.538   04:58:02 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:48.538  ************************************
00:07:48.538  START TEST unittest_nvmf_transport
00:07:48.538  ************************************
00:07:48.538   04:58:02 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:07:48.538  
00:07:48.538  
00:07:48.538       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.538       http://cunit.sourceforge.net/
00:07:48.538  
00:07:48.538  
00:07:48.538  Suite: nvmf
00:07:48.538    Test: test_spdk_nvmf_transport_create ...[2024-11-20 04:58:02.392562] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable.
00:07:48.538  [2024-11-20 04:58:02.392901] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0
00:07:48.538  [2024-11-20 04:58:02.392984] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 275:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536
00:07:48.538  [2024-11-20 04:58:02.393138] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 258:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB
00:07:48.538  passed
00:07:48.538    Test: test_nvmf_transport_poll_group_create ...passed
00:07:48.538    Test: test_spdk_nvmf_transport_opts_init ...[2024-11-20 04:58:02.393409] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 799:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable.
00:07:48.538  [2024-11-20 04:58:02.393510] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 804:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL
00:07:48.538  [2024-11-20 04:58:02.393565] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 809:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value
00:07:48.538  passed
00:07:48.538    Test: test_spdk_nvmf_transport_listen_ext ...passed
00:07:48.538  
00:07:48.538  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.538                suites      1      1    n/a      0        0
00:07:48.538                 tests      4      4      4      0        0
00:07:48.538               asserts     49     49     49      0      n/a
00:07:48.538  
00:07:48.538  Elapsed time =    0.001 seconds
00:07:48.538  
00:07:48.538  real	0m0.043s
00:07:48.538  user	0m0.024s
00:07:48.538  sys	0m0.019s
00:07:48.538   04:58:02 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.538   04:58:02 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x
00:07:48.538  ************************************
00:07:48.538  END TEST unittest_nvmf_transport
00:07:48.538  ************************************
00:07:48.538   04:58:02 unittest -- unit/unittest.sh@236 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:48.538   04:58:02 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:48.538   04:58:02 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.538   04:58:02 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:48.538  ************************************
00:07:48.538  START TEST unittest_rdma
00:07:48.538  ************************************
00:07:48.538   04:58:02 unittest.unittest_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:07:48.538  
00:07:48.538  
00:07:48.538       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.538       http://cunit.sourceforge.net/
00:07:48.538  
00:07:48.538  
00:07:48.538  Suite: rdma_common
00:07:48.538    Test: test_spdk_rdma_pd ...[2024-11-20 04:58:02.483809] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:07:48.538  [2024-11-20 04:58:02.484180] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:07:48.538  passed
00:07:48.538  
00:07:48.538  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.538                suites      1      1    n/a      0        0
00:07:48.538                 tests      1      1      1      0        0
00:07:48.538               asserts     31     31     31      0      n/a
00:07:48.538  
00:07:48.538  Elapsed time =    0.001 seconds
00:07:48.797  
00:07:48.797  real	0m0.032s
00:07:48.797  user	0m0.022s
00:07:48.797  sys	0m0.010s
00:07:48.797   04:58:02 unittest.unittest_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.797   04:58:02 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:48.797  ************************************
00:07:48.797  END TEST unittest_rdma
00:07:48.797  ************************************
00:07:48.797   04:58:02 unittest -- unit/unittest.sh@237 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:48.797   04:58:02 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:48.797   04:58:02 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.797   04:58:02 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:48.797  ************************************
00:07:48.797  START TEST unittest_nvmf_rdma
00:07:48.797  ************************************
00:07:48.797   04:58:02 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:07:48.797  
00:07:48.797  
00:07:48.797       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.797       http://cunit.sourceforge.net/
00:07:48.797  
00:07:48.797  
00:07:48.797  Suite: nvmf
00:07:48.797    Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-11-20 04:58:02.573963] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:07:48.797  [2024-11-20 04:58:02.574439] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:07:48.797  [2024-11-20 04:58:02.574625] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:07:48.797  passed
00:07:48.797    Test: test_spdk_nvmf_rdma_request_process ...passed
00:07:48.797    Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:07:48.797    Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:07:48.797    Test: test_nvmf_rdma_opts_init ...passed
00:07:48.797    Test: test_nvmf_rdma_request_free_data ...passed
00:07:48.797    Test: test_nvmf_rdma_resources_create ...passed
00:07:48.797    Test: test_nvmf_rdma_qpair_compare ...passed
00:07:48.798    Test: test_nvmf_rdma_resize_cq ...[2024-11-20 04:58:02.578830] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:07:48.798  Using CQ of insufficient size may lead to CQ overrun
00:07:48.798  [2024-11-20 04:58:02.579074] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:07:48.798  [2024-11-20 04:58:02.579261] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 968:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 2: No such file or directory
00:07:48.798  passed
00:07:48.798  
00:07:48.798  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:48.798                suites      1      1    n/a      0        0
00:07:48.798                 tests      9      9      9      0        0
00:07:48.798               asserts    579    579    579      0      n/a
00:07:48.798  
00:07:48.798  Elapsed time =    0.004 seconds
00:07:48.798  
00:07:48.798  real	0m0.044s
00:07:48.798  user	0m0.023s
00:07:48.798  sys	0m0.019s
00:07:48.798   04:58:02 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:48.798   04:58:02 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:07:48.798  ************************************
00:07:48.798  END TEST unittest_nvmf_rdma
00:07:48.798  ************************************
00:07:48.798   04:58:02 unittest -- unit/unittest.sh@240 -- # [[ y == y ]]
00:07:48.798   04:58:02 unittest -- unit/unittest.sh@241 -- # run_test unittest_nvme_cuse /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:48.798   04:58:02 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:48.798   04:58:02 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:48.798   04:58:02 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:48.798  ************************************
00:07:48.798  START TEST unittest_nvme_cuse
00:07:48.798  ************************************
00:07:48.798   04:58:02 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_cuse.c/nvme_cuse_ut
00:07:48.798  
00:07:48.798  
00:07:48.798       CUnit - A unit testing framework for C - Version 2.1-3
00:07:48.798       http://cunit.sourceforge.net/
00:07:48.798  
00:07:48.798  
00:07:48.798  Suite: nvme_cuse
00:07:48.798    Test: test_cuse_nvme_submit_io_read_write ...passed
00:07:48.798    Test: test_cuse_nvme_submit_io_read_write_with_md ...passed
00:07:48.798    Test: test_cuse_nvme_submit_passthru_cmd ...passed
00:07:48.798    Test: test_cuse_nvme_submit_passthru_cmd_with_md ...passed
00:07:48.798    Test: test_nvme_cuse_get_cuse_ns_device ...passed
00:07:48.798    Test: test_cuse_nvme_submit_io ...[2024-11-20 04:58:02.665148] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 667:cuse_nvme_submit_io: *ERROR*: SUBMIT_IO: opc:0 not valid
00:07:48.798  passed
00:07:48.798    Test: test_cuse_nvme_reset ...[2024-11-20 04:58:02.665522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_cuse.c: 352:cuse_nvme_reset: *ERROR*: Namespace reset not supported
00:07:48.798  passed
00:07:49.368    Test: test_nvme_cuse_stop ...passed
00:07:49.368    Test: test_spdk_nvme_cuse_get_ctrlr_name ...passed
00:07:49.368  
00:07:49.368  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.368                suites      1      1    n/a      0        0
00:07:49.368                 tests      9      9      9      0        0
00:07:49.368               asserts    118    118    118      0      n/a
00:07:49.368  
00:07:49.368  Elapsed time =    0.503 seconds
00:07:49.368  
00:07:49.368  real	0m0.539s
00:07:49.368  user	0m0.297s
00:07:49.368  sys	0m0.242s
00:07:49.368   04:58:03 unittest.unittest_nvme_cuse -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.368   04:58:03 unittest.unittest_nvme_cuse -- common/autotest_common.sh@10 -- # set +x
00:07:49.368  ************************************
00:07:49.368  END TEST unittest_nvme_cuse
00:07:49.368  ************************************
00:07:49.368   04:58:03 unittest -- unit/unittest.sh@244 -- # run_test unittest_nvmf unittest_nvmf
00:07:49.368   04:58:03 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.368   04:58:03 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.368   04:58:03 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:49.368  ************************************
00:07:49.368  START TEST unittest_nvmf
00:07:49.368  ************************************
00:07:49.368   04:58:03 unittest.unittest_nvmf -- common/autotest_common.sh@1129 -- # unittest_nvmf
00:07:49.368   04:58:03 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut
00:07:49.368  
00:07:49.368  
00:07:49.368       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.368       http://cunit.sourceforge.net/
00:07:49.368  
00:07:49.368  
00:07:49.368  Suite: nvmf
00:07:49.368    Test: test_get_log_page ...[2024-11-20 04:58:03.266685] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2655:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2
00:07:49.368  passed
00:07:49.368    Test: test_process_fabrics_cmd ...[2024-11-20 04:58:03.268108] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT
00:07:49.368  passed
00:07:49.368    Test: test_connect ...[2024-11-20 04:58:03.269943] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1013:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small
00:07:49.368  [2024-11-20 04:58:03.270117] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 876:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234
00:07:49.368  [2024-11-20 04:58:03.270189] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1052:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated
00:07:49.368  [2024-11-20 04:58:03.270573] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1'
00:07:49.368  [2024-11-20 04:58:03.270718] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 887:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0
00:07:49.368  [2024-11-20 04:58:03.271222] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31)
00:07:49.368  [2024-11-20 04:58:03.271320] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63)
00:07:49.369  [2024-11-20 04:58:03.271768] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 927:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234).
00:07:49.369  [2024-11-20 04:58:03.272000] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff
00:07:49.369  [2024-11-20 04:58:03.272590] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 677:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller
00:07:49.369  [2024-11-20 04:58:03.273308] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 683:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled
00:07:49.369  [2024-11-20 04:58:03.273494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3
00:07:49.369  [2024-11-20 04:58:03.273876] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3
00:07:49.369  [2024-11-20 04:58:03.274347] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2
00:07:49.369  [2024-11-20 04:58:03.274779] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0)
00:07:49.369  [2024-11-20 04:58:03.275957] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 807:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group (nil))
00:07:49.369  [2024-11-20 04:58:03.276078] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 807:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group (nil))
00:07:49.369  passed
00:07:49.369    Test: test_get_ns_id_desc_list ...passed
00:07:49.369    Test: test_identify_ns ...[2024-11-20 04:58:03.276740] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:49.369  [2024-11-20 04:58:03.277263] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4
00:07:49.369  [2024-11-20 04:58:03.277538] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:07:49.369  passed
00:07:49.369    Test: test_identify_ns_iocs_specific ...[2024-11-20 04:58:03.277974] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:49.369  [2024-11-20 04:58:03.278578] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:07:49.369  passed
00:07:49.369    Test: test_reservation_write_exclusive ...passed
00:07:49.369    Test: test_reservation_exclusive_access ...passed
00:07:49.369    Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed
00:07:49.369    Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed
00:07:49.369    Test: test_reservation_notification_log_page ...passed
00:07:49.369    Test: test_get_dif_ctx ...passed
00:07:49.369    Test: test_set_get_features ...[2024-11-20 04:58:03.280444] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1649:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:49.369  [2024-11-20 04:58:03.280896] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1649:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:07:49.369  [2024-11-20 04:58:03.280965] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1660:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3
00:07:49.369  [2024-11-20 04:58:03.281244] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1736:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit
00:07:49.369  passed
00:07:49.369    Test: test_identify_ctrlr ...passed
00:07:49.369    Test: test_identify_ctrlr_iocs_specific ...passed
00:07:49.369    Test: test_custom_admin_cmd ...passed
00:07:49.369    Test: test_fused_compare_and_write ...[2024-11-20 04:58:03.282359] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4368:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations
00:07:49.369  [2024-11-20 04:58:03.282523] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4357:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:49.369  [2024-11-20 04:58:03.282960] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4375:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:07:49.369  passed
00:07:49.369    Test: test_multi_async_event_reqs ...passed
00:07:49.369    Test: test_get_ana_log_page_one_ns_per_anagrp ...passed
00:07:49.369    Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed
00:07:49.369    Test: test_multi_async_events ...passed
00:07:49.369    Test: test_rae ...passed
00:07:49.369    Test: test_nvmf_ctrlr_create_destruct ...passed
00:07:49.369    Test: test_nvmf_ctrlr_use_zcopy ...passed
00:07:49.369    Test: test_spdk_nvmf_request_zcopy_start ...[2024-11-20 04:58:03.285472] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT
00:07:49.369  [2024-11-20 04:58:03.285659] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4
00:07:49.369  passed
00:07:49.369    Test: test_zcopy_read ...passed
00:07:49.369    Test: test_zcopy_write ...passed
00:07:49.369    Test: test_nvmf_property_set ...passed
00:07:49.369    Test: test_nvmf_ctrlr_get_features_host_behavior_support ...[2024-11-20 04:58:03.286380] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1947:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:49.369  [2024-11-20 04:58:03.286708] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1947:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:07:49.369  passed
00:07:49.369    Test: test_nvmf_ctrlr_set_features_host_behavior_support ...[2024-11-20 04:58:03.286803] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1971:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0
00:07:49.369  [2024-11-20 04:58:03.286946] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1977:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0
00:07:49.369  [2024-11-20 04:58:03.287335] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1989:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:07:49.369  [2024-11-20 04:58:03.287440] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1989:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:07:49.369  passed
00:07:49.369    Test: test_nvmf_ctrlr_ns_attachment ...passed
00:07:49.369    Test: test_nvmf_check_qpair_active ...[2024-11-20 04:58:03.287986] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4860:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT
00:07:49.369  [2024-11-20 04:58:03.288277] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4874:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication
00:07:49.369  [2024-11-20 04:58:03.288337] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0
00:07:49.369  [2024-11-20 04:58:03.288677] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4
00:07:49.369  [2024-11-20 04:58:03.288779] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4886:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5
00:07:49.369  passed
00:07:49.369  
00:07:49.369  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.369                suites      1      1    n/a      0        0
00:07:49.369                 tests     32     32     32      0        0
00:07:49.369               asserts    993    993    993      0      n/a
00:07:49.369  
00:07:49.369  Elapsed time =    0.023 seconds
00:07:49.369   04:58:03 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut
00:07:49.369  
00:07:49.369  
00:07:49.369       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.369       http://cunit.sourceforge.net/
00:07:49.369  
00:07:49.369  
00:07:49.369  Suite: nvmf
00:07:49.369    Test: test_get_rw_params ...passed
00:07:49.369    Test: test_get_rw_ext_params ...passed
00:07:49.369    Test: test_lba_in_range ...passed
00:07:49.369    Test: test_get_dif_ctx ...passed
00:07:49.369    Test: test_nvmf_bdev_ctrlr_identify_ns ...passed
00:07:49.369    Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-11-20 04:58:03.321117] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 499:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch
00:07:49.369  [2024-11-20 04:58:03.321444] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media
00:07:49.369  [2024-11-20 04:58:03.321547] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 514:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023
00:07:49.369  passed
00:07:49.369    Test: test_nvmf_bdev_ctrlr_zcopy_start ...[2024-11-20 04:58:03.321617] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1018:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media
00:07:49.369  [2024-11-20 04:58:03.321700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1025:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023
00:07:49.369  passed
00:07:49.369    Test: test_nvmf_bdev_ctrlr_cmd ...[2024-11-20 04:58:03.321800] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 453:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media
00:07:49.369  [2024-11-20 04:58:03.321841] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 460:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512
00:07:49.369  [2024-11-20 04:58:03.321916] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 552:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib
00:07:49.369  passed
00:07:49.369    Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed
00:07:49.369    Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed
00:07:49.369  
00:07:49.369  [2024-11-20 04:58:03.321957] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 559:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media
00:07:49.369  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.369                suites      1      1    n/a      0        0
00:07:49.369                 tests     10     10     10      0        0
00:07:49.369               asserts    159    159    159      0      n/a
00:07:49.369  
00:07:49.369  Elapsed time =    0.001 seconds
00:07:49.630   04:58:03 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut
00:07:49.630  
00:07:49.630  
00:07:49.630       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.630       http://cunit.sourceforge.net/
00:07:49.630  
00:07:49.630  
00:07:49.630  Suite: nvmf
00:07:49.630    Test: test_discovery_log ...passed
00:07:49.630    Test: test_discovery_log_with_filters ...passed
00:07:49.630  
00:07:49.630  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.630                suites      1      1    n/a      0        0
00:07:49.630                 tests      2      2      2      0        0
00:07:49.630               asserts    238    238    238      0      n/a
00:07:49.630  
00:07:49.630  Elapsed time =    0.002 seconds
00:07:49.630   04:58:03 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut
00:07:49.630  
00:07:49.630  
00:07:49.630       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.630       http://cunit.sourceforge.net/
00:07:49.630  
00:07:49.630  
00:07:49.630  Suite: nvmf
00:07:49.630    Test: nvmf_test_create_subsystem ...[2024-11-20 04:58:03.395389] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 125:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix.
00:07:49.630  [2024-11-20 04:58:03.395660] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid
00:07:49.630  [2024-11-20 04:58:03.395813] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long.
00:07:49.630  [2024-11-20 04:58:03.395908] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid
00:07:49.630  [2024-11-20 04:58:03.395953] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter.
00:07:49.630  [2024-11-20 04:58:03.396000] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid
00:07:49.630  [2024-11-20 04:58:03.396077] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter.
00:07:49.630  [2024-11-20 04:58:03.396130] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid
00:07:49.630  [2024-11-20 04:58:03.396166] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 183:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol.
00:07:49.630  [2024-11-20 04:58:03.396206] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid
00:07:49.630  [2024-11-20 04:58:03.396247] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter.
00:07:49.630  [2024-11-20 04:58:03.396288] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid
00:07:49.630  [2024-11-20 04:58:03.396395] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223
00:07:49.630  [2024-11-20 04:58:03.396488] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid
00:07:49.630  [2024-11-20 04:58:03.396586] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8.
00:07:49.630  [2024-11-20 04:58:03.396632] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid
00:07:49.630  [2024-11-20 04:58:03.396722] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length
00:07:49.630  [2024-11-20 04:58:03.396763] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid
00:07:49.630  [2024-11-20 04:58:03.396809] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:49.630  passed
00:07:49.630    Test: test_spdk_nvmf_subsystem_add_ns ...[2024-11-20 04:58:03.396862] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid
00:07:49.630  [2024-11-20 04:58:03.396903] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:07:49.630  [2024-11-20 04:58:03.396938] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid
00:07:49.630  [2024-11-20 04:58:03.397117] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use
00:07:49.630  passed
00:07:49.630    Test: test_spdk_nvmf_subsystem_add_fdp_ns ...[2024-11-20 04:58:03.397176] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2096:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295
00:07:49.630  [2024-11-20 04:58:03.397456] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2230:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace.
00:07:49.630  passed
00:07:49.630    Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:07:49.630    Test: test_spdk_nvmf_ns_visible ...[2024-11-20 04:58:03.397691] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11
00:07:49.630  passed
00:07:49.630    Test: test_reservation_register ...[2024-11-20 04:58:03.398098] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.630  [2024-11-20 04:58:03.398212] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3277:nvmf_ns_reservation_register: *ERROR*: No registrant
00:07:49.630  passed
00:07:49.630    Test: test_reservation_register_with_ptpl ...passed
00:07:49.630    Test: test_reservation_acquire_preempt_1 ...[2024-11-20 04:58:03.399131] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.630  passed
00:07:49.630    Test: test_reservation_acquire_release_with_ptpl ...passed
00:07:49.630    Test: test_reservation_release ...[2024-11-20 04:58:03.400758] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.630  passed
00:07:49.630    Test: test_reservation_unregister_notification ...[2024-11-20 04:58:03.401025] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.630  passed
00:07:49.630    Test: test_reservation_release_notification ...[2024-11-20 04:58:03.401216] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.630  passed
00:07:49.630    Test: test_reservation_release_notification_write_exclusive ...[2024-11-20 04:58:03.401459] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.631  passed
00:07:49.631    Test: test_reservation_clear_notification ...[2024-11-20 04:58:03.401680] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.631  passed
00:07:49.631    Test: test_reservation_preempt_notification ...[2024-11-20 04:58:03.401915] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3219:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:07:49.631  passed
00:07:49.631    Test: test_spdk_nvmf_ns_event ...passed
00:07:49.631    Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:07:49.631    Test: test_nvmf_subsystem_add_ctrlr ...passed
00:07:49.631    Test: test_spdk_nvmf_subsystem_add_host ...[2024-11-20 04:58:03.402673] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 264:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_ns_reservation_report ...[2024-11-20 04:58:03.402763] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport
00:07:49.631  [2024-11-20 04:58:03.402893] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3582:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_nqn_is_valid ...[2024-11-20 04:58:03.402980] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
00:07:49.631  [2024-11-20 04:58:03.403038] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:db5c0741-8e58-4eff-8c0f-e2224f7e60d": uuid is not the correct length
00:07:49.631  [2024-11-20 04:58:03.403083] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_ns_reservation_restore ...passed
00:07:49.631    Test: test_nvmf_subsystem_state_change ...[2024-11-20 04:58:03.403181] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2776:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_reservation_custom_ops ...passed
00:07:49.631  
00:07:49.631  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.631                suites      1      1    n/a      0        0
00:07:49.631                 tests     24     24     24      0        0
00:07:49.631               asserts    499    499    499      0      n/a
00:07:49.631  
00:07:49.631  Elapsed time =    0.009 seconds
00:07:49.631   04:58:03 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:07:49.631  
00:07:49.631  
00:07:49.631       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.631       http://cunit.sourceforge.net/
00:07:49.631  
00:07:49.631  
00:07:49.631  Suite: nvmf
00:07:49.631    Test: test_nvmf_tcp_create ...[2024-11-20 04:58:03.463989] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 811:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_tcp_destroy ...passed
00:07:49.631    Test: test_nvmf_tcp_poll_group_create ...passed
00:07:49.631    Test: test_nvmf_tcp_send_c2h_data ...passed
00:07:49.631    Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:07:49.631    Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:07:49.631    Test: test_nvmf_tcp_qpair_init_mem_resource ...[2024-11-20 04:58:03.536086] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed5a0 is same with the state(5) to be set
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_tcp_send_c2h_term_req ...[2024-11-20 04:58:03.561952] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.562019] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_tcp_send_capsule_resp_pdu ...[2024-11-20 04:58:03.562073] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.562114] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.562158] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_tcp_icreq_handle ...[2024-11-20 04:58:03.562246] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2288:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:49.631  [2024-11-20 04:58:03.562367] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.562407] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.562454] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2288:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:07:49.631  [2024-11-20 04:58:03.562492] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.562535] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.562571] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.562635] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.562673] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_tcp_check_xfer_type ...passed
00:07:49.631    Test: test_nvmf_tcp_invalid_sgl ...[2024-11-20 04:58:03.562754] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2697:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:07:49.631  [2024-11-20 04:58:03.562799] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.562843] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7becc20 is same with the state(6) to be set
00:07:49.631  passed
00:07:49.631    Test: test_nvmf_tcp_pdu_ch_handle ...[2024-11-20 04:58:03.562894] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2415:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x7ffff7bed990
00:07:49.631  [2024-11-20 04:58:03.563006] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563059] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563103] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2472:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x7ffff7bed0e0
00:07:49.631  [2024-11-20 04:58:03.563197] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563237] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563280] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2425:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:07:49.631  [2024-11-20 04:58:03.563347] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563412] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563458] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2464:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:07:49.631  [2024-11-20 04:58:03.563501] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563537] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563585] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563648] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563694] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563753] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563805] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563843] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563882] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.563939] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.563998] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.564055] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  [2024-11-20 04:58:03.564100] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1218:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=2
00:07:49.631  [2024-11-20 04:58:03.564144] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ffff7bed0e0 is same with the state(6) to be set
00:07:49.631  passed
00:07:49.890    Test: test_nvmf_tcp_tls_add_remove_credentials ...passed
00:07:49.890    Test: test_nvmf_tcp_tls_generate_psk_id ...[2024-11-20 04:58:03.586436] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 584:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:07:49.890  [2024-11-20 04:58:03.586502] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 595:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:07:49.890  passed
00:07:49.890    Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-11-20 04:58:03.586916] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 651:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:07:49.890  passed
00:07:49.890    Test: test_nvmf_tcp_tls_generate_tls_psk ...[2024-11-20 04:58:03.586990] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 656:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:07:49.890  [2024-11-20 04:58:03.587240] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 725:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:07:49.890  passed
00:07:49.890  
00:07:49.890  [2024-11-20 04:58:03.587305] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 749:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:07:49.890  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.890                suites      1      1    n/a      0        0
00:07:49.890                 tests     17     17     17      0        0
00:07:49.890               asserts    215    215    215      0      n/a
00:07:49.890  
00:07:49.890  Elapsed time =    0.146 seconds
00:07:49.890   04:58:03 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:07:49.890  
00:07:49.890  
00:07:49.890       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.890       http://cunit.sourceforge.net/
00:07:49.890  
00:07:49.890  
00:07:49.890  Suite: nvmf
00:07:49.890    Test: test_nvmf_tgt_create_poll_group ...passed
00:07:49.890  
00:07:49.890  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.890                suites      1      1    n/a      0        0
00:07:49.890                 tests      1      1      1      0        0
00:07:49.890               asserts     17     17     17      0      n/a
00:07:49.890  
00:07:49.890  Elapsed time =    0.022 seconds
00:07:49.890  
00:07:49.890  real	0m0.510s
00:07:49.890  user	0m0.249s
00:07:49.890  sys	0m0.262s
00:07:49.890   04:58:03 unittest.unittest_nvmf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.891   04:58:03 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x
00:07:49.891  ************************************
00:07:49.891  END TEST unittest_nvmf
00:07:49.891  ************************************
00:07:49.891   04:58:03 unittest -- unit/unittest.sh@245 -- # [[ n == y ]]
00:07:49.891   04:58:03 unittest -- unit/unittest.sh@250 -- # [[ n == y ]]
00:07:49.891   04:58:03 unittest -- unit/unittest.sh@254 -- # run_test unittest_scsi unittest_scsi
00:07:49.891   04:58:03 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.891   04:58:03 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.891   04:58:03 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:49.891  ************************************
00:07:49.891  START TEST unittest_scsi
00:07:49.891  ************************************
00:07:49.891   04:58:03 unittest.unittest_scsi -- common/autotest_common.sh@1129 -- # unittest_scsi
00:07:49.891   04:58:03 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:07:49.891  
00:07:49.891  
00:07:49.891       CUnit - A unit testing framework for C - Version 2.1-3
00:07:49.891       http://cunit.sourceforge.net/
00:07:49.891  
00:07:49.891  
00:07:49.891  Suite: dev_suite
00:07:49.891    Test: dev_destruct_null_dev ...passed
00:07:49.891    Test: dev_destruct_zero_luns ...passed
00:07:49.891    Test: dev_destruct_null_lun ...passed
00:07:49.891    Test: dev_destruct_success ...passed
00:07:49.891    Test: dev_construct_num_luns_zero ...[2024-11-20 04:58:03.816092] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:07:49.891  passed
00:07:49.891    Test: dev_construct_no_lun_zero ...[2024-11-20 04:58:03.816762] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:07:49.891  passed
00:07:49.891    Test: dev_construct_null_lun ...passed
00:07:49.891    Test: dev_construct_name_too_long ...[2024-11-20 04:58:03.816884] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 247:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:07:49.891  [2024-11-20 04:58:03.816935] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 222:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:07:49.891  passed
00:07:49.891    Test: dev_construct_success ...passed
00:07:49.891    Test: dev_construct_success_lun_zero_not_first ...passed
00:07:49.891    Test: dev_queue_mgmt_task_success ...passed
00:07:49.891    Test: dev_queue_task_success ...passed
00:07:49.891    Test: dev_stop_success ...passed
00:07:49.891    Test: dev_add_port_max_ports ...[2024-11-20 04:58:03.818043] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:07:49.891  passed
00:07:49.891    Test: dev_add_port_construct_failure1 ...[2024-11-20 04:58:03.818386] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c:  49:scsi_port_construct: *ERROR*: port name too long
00:07:49.891  passed
00:07:49.891    Test: dev_add_port_construct_failure2 ...[2024-11-20 04:58:03.818609] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:07:49.891  passed
00:07:49.891    Test: dev_add_port_success1 ...passed
00:07:49.891    Test: dev_add_port_success2 ...passed
00:07:49.891    Test: dev_add_port_success3 ...passed
00:07:49.891    Test: dev_find_port_by_id_num_ports_zero ...passed
00:07:49.891    Test: dev_find_port_by_id_id_not_found_failure ...passed
00:07:49.891    Test: dev_find_port_by_id_success ...passed
00:07:49.891    Test: dev_add_lun_bdev_not_found ...passed
00:07:49.891    Test: dev_add_lun_no_free_lun_id ...[2024-11-20 04:58:03.819618] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:07:49.891  passed
00:07:49.891    Test: dev_add_lun_success1 ...passed
00:07:49.891    Test: dev_add_lun_success2 ...passed
00:07:49.891    Test: dev_check_pending_tasks ...passed
00:07:49.891    Test: dev_iterate_luns ...passed
00:07:49.891    Test: dev_find_free_lun ...passed
00:07:49.891  
00:07:49.891  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:49.891                suites      1      1    n/a      0        0
00:07:49.891                 tests     29     29     29      0        0
00:07:49.891               asserts     97     97     97      0      n/a
00:07:49.891  
00:07:49.891  Elapsed time =    0.005 seconds
00:07:49.891   04:58:03 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:07:50.151  
00:07:50.151  
00:07:50.151       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.151       http://cunit.sourceforge.net/
00:07:50.151  
00:07:50.151  
00:07:50.151  Suite: lun_suite
00:07:50.151    Test: lun_task_mgmt_execute_abort_task_not_supported ...[2024-11-20 04:58:03.854073] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:07:50.151  passed
00:07:50.151    Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-11-20 04:58:03.854422] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:07:50.151  passed
00:07:50.151    Test: lun_task_mgmt_execute_lun_reset ...passed
00:07:50.151    Test: lun_task_mgmt_execute_target_reset ...passed
00:07:50.151    Test: lun_task_mgmt_execute_invalid_case ...[2024-11-20 04:58:03.854591] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:07:50.151  passed
00:07:50.151    Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:07:50.151    Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:07:50.151    Test: lun_append_task_null_lun_not_supported ...passed
00:07:50.151    Test: lun_execute_scsi_task_pending ...passed
00:07:50.151    Test: lun_execute_scsi_task_complete ...passed
00:07:50.151    Test: lun_execute_scsi_task_resize ...passed
00:07:50.151    Test: lun_destruct_success ...passed
00:07:50.151    Test: lun_construct_null_ctx ...[2024-11-20 04:58:03.854831] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:07:50.151  passed
00:07:50.151    Test: lun_construct_success ...passed
00:07:50.151    Test: lun_reset_task_wait_scsi_task_complete ...passed
00:07:50.151    Test: lun_reset_task_suspend_scsi_task ...passed
00:07:50.151    Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:07:50.151    Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:07:50.151  
00:07:50.151  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.151                suites      1      1    n/a      0        0
00:07:50.151                 tests     18     18     18      0        0
00:07:50.151               asserts    153    153    153      0      n/a
00:07:50.151  
00:07:50.151  Elapsed time =    0.001 seconds
00:07:50.151   04:58:03 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:07:50.151  
00:07:50.151  
00:07:50.151       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.151       http://cunit.sourceforge.net/
00:07:50.151  
00:07:50.151  
00:07:50.151  Suite: scsi_suite
00:07:50.151    Test: scsi_init ...passed
00:07:50.151  
00:07:50.151  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.151                suites      1      1    n/a      0        0
00:07:50.151                 tests      1      1      1      0        0
00:07:50.151               asserts      1      1      1      0      n/a
00:07:50.151  
00:07:50.151  Elapsed time =    0.000 seconds
00:07:50.151   04:58:03 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:07:50.151  
00:07:50.151  
00:07:50.151       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.151       http://cunit.sourceforge.net/
00:07:50.151  
00:07:50.151  
00:07:50.151  Suite: translation_suite
00:07:50.151    Test: mode_select_6_test ...passed
00:07:50.151    Test: mode_select_6_test2 ...passed
00:07:50.151    Test: mode_sense_6_test ...passed
00:07:50.151    Test: mode_sense_10_test ...passed
00:07:50.151    Test: inquiry_evpd_test ...passed
00:07:50.151    Test: inquiry_standard_test ...passed
00:07:50.151    Test: inquiry_overflow_test ...passed
00:07:50.151    Test: task_complete_test ...passed
00:07:50.151    Test: lba_range_test ...passed
00:07:50.151    Test: xfer_len_test ...[2024-11-20 04:58:03.911636] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1270:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:07:50.151  passed
00:07:50.151    Test: xfer_test ...passed
00:07:50.151    Test: scsi_name_padding_test ...passed
00:07:50.151    Test: get_dif_ctx_test ...passed
00:07:50.151    Test: unmap_split_test ...passed
00:07:50.151  
00:07:50.151  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.151                suites      1      1    n/a      0        0
00:07:50.151                 tests     14     14     14      0        0
00:07:50.151               asserts   1205   1205   1205      0      n/a
00:07:50.151  
00:07:50.151  Elapsed time =    0.005 seconds
00:07:50.151   04:58:03 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:07:50.151  
00:07:50.151  
00:07:50.151       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.151       http://cunit.sourceforge.net/
00:07:50.151  
00:07:50.151  
00:07:50.151  Suite: reservation_suite
00:07:50.151    Test: test_reservation_register ...[2024-11-20 04:58:03.946285] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  passed
00:07:50.151    Test: test_reservation_reserve ...[2024-11-20 04:58:03.946599] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  [2024-11-20 04:58:03.946667] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:07:50.151  passed
00:07:50.151    Test: test_all_registrant_reservation_reserve ...[2024-11-20 04:58:03.946754] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:07:50.151  [2024-11-20 04:58:03.946818] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  passed
00:07:50.151    Test: test_all_registrant_reservation_access ...[2024-11-20 04:58:03.946924] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  [2024-11-20 04:58:03.946987] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type  reject command 0x8
00:07:50.151  [2024-11-20 04:58:03.947042] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 865:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type  reject command 0xaa
00:07:50.151  passed
00:07:50.151    Test: test_reservation_preempt_non_all_regs ...[2024-11-20 04:58:03.947109] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  [2024-11-20 04:58:03.947170] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:07:50.151  passed
00:07:50.151    Test: test_reservation_preempt_all_regs ...[2024-11-20 04:58:03.947281] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  passed
00:07:50.151    Test: test_reservation_cmds_conflict ...[2024-11-20 04:58:03.947410] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  [2024-11-20 04:58:03.947478] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 857:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type  reject command 0x2a
00:07:50.151  [2024-11-20 04:58:03.947544] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:50.151  [2024-11-20 04:58:03.947582] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:50.151  passed
00:07:50.151    Test: test_scsi2_reserve_release ...passed
00:07:50.151    Test: test_pr_with_scsi2_reserve_release ...[2024-11-20 04:58:03.947623] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:07:50.151  [2024-11-20 04:58:03.947656] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 851:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:07:50.151  passed
00:07:50.151  
00:07:50.151  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.151                suites      1      1    n/a      0        0
00:07:50.151                 tests      9      9      9      0        0
00:07:50.151               asserts    344    344    344      0      n/a
00:07:50.151  
00:07:50.151  Elapsed time =    0.002 seconds
00:07:50.151  [2024-11-20 04:58:03.947744] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 278:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:07:50.151  
00:07:50.151  real	0m0.160s
00:07:50.151  user	0m0.101s
00:07:50.151  sys	0m0.060s
00:07:50.151   04:58:03 unittest.unittest_scsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.152   04:58:03 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x
00:07:50.152  ************************************
00:07:50.152  END TEST unittest_scsi
00:07:50.152  ************************************
00:07:50.152    04:58:03 unittest -- unit/unittest.sh@255 -- # uname -s
00:07:50.152   04:58:03 unittest -- unit/unittest.sh@255 -- # '[' Linux = Linux ']'
00:07:50.152   04:58:03 unittest -- unit/unittest.sh@258 -- # run_test unittest_sock unittest_sock
00:07:50.152   04:58:03 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.152   04:58:03 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.152   04:58:04 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:50.152  ************************************
00:07:50.152  START TEST unittest_sock
00:07:50.152  ************************************
00:07:50.152   04:58:04 unittest.unittest_sock -- common/autotest_common.sh@1129 -- # unittest_sock
00:07:50.152   04:58:04 unittest.unittest_sock -- unit/unittest.sh@125 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/sock.c/sock_ut
00:07:50.152  
00:07:50.152  
00:07:50.152       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.152       http://cunit.sourceforge.net/
00:07:50.152  
00:07:50.152  
00:07:50.152  Suite: sock
00:07:50.152    Test: posix_sock ...passed
00:07:50.152    Test: ut_sock ...passed
00:07:50.152    Test: posix_sock_group ...passed
00:07:50.152    Test: ut_sock_group ...passed
00:07:50.152    Test: posix_sock_group_fairness ...passed
00:07:50.152    Test: _posix_sock_close ...passed
00:07:50.152    Test: sock_get_default_opts ...passed
00:07:50.152    Test: ut_sock_impl_get_set_opts ...passed
00:07:50.152    Test: posix_sock_impl_get_set_opts ...passed
00:07:50.152    Test: ut_sock_map ...passed
00:07:50.152    Test: override_impl_opts ...passed
00:07:50.152    Test: ut_sock_group_get_ctx ...passed
00:07:50.152    Test: posix_get_interface_name ...passed
00:07:50.152  
00:07:50.152  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.152                suites      1      1    n/a      0        0
00:07:50.152                 tests     13     13     13      0        0
00:07:50.152               asserts    360    360    360      0      n/a
00:07:50.152  
00:07:50.152  Elapsed time =    0.010 seconds
00:07:50.152   04:58:04 unittest.unittest_sock -- unit/unittest.sh@126 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/sock/posix.c/posix_ut
00:07:50.152  
00:07:50.152  
00:07:50.152       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.152       http://cunit.sourceforge.net/
00:07:50.152  
00:07:50.152  
00:07:50.152  Suite: posix
00:07:50.152    Test: flush ...passed
00:07:50.152  
00:07:50.152  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.152                suites      1      1    n/a      0        0
00:07:50.152                 tests      1      1      1      0        0
00:07:50.152               asserts     28     28     28      0      n/a
00:07:50.152  
00:07:50.152  Elapsed time =    0.000 seconds
00:07:50.411   04:58:04 unittest.unittest_sock -- unit/unittest.sh@128 -- # [[ n == y ]]
00:07:50.411  
00:07:50.411  real	0m0.101s
00:07:50.411  user	0m0.026s
00:07:50.411  sys	0m0.051s
00:07:50.411   04:58:04 unittest.unittest_sock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.411   04:58:04 unittest.unittest_sock -- common/autotest_common.sh@10 -- # set +x
00:07:50.411  ************************************
00:07:50.411  END TEST unittest_sock
00:07:50.411  ************************************
00:07:50.411   04:58:04 unittest -- unit/unittest.sh@260 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:50.411   04:58:04 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.411   04:58:04 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.411   04:58:04 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:50.411  ************************************
00:07:50.411  START TEST unittest_thread
00:07:50.411  ************************************
00:07:50.411   04:58:04 unittest.unittest_thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:07:50.411  
00:07:50.411  
00:07:50.411       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.411       http://cunit.sourceforge.net/
00:07:50.411  
00:07:50.411  
00:07:50.411  Suite: io_channel
00:07:50.411    Test: thread_alloc ...passed
00:07:50.411    Test: thread_send_msg ...passed
00:07:50.411    Test: thread_poller ...passed
00:07:50.411    Test: poller_pause ...passed
00:07:50.411    Test: thread_for_each ...passed
00:07:50.411    Test: for_each_channel_remove ...passed
00:07:50.411    Test: for_each_channel_unreg ...[2024-11-20 04:58:04.211025] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2193:spdk_io_device_register: *ERROR*: io_device 0x7ffef8ab4c80 already registered (old:0x613000000200 new:0x6130000003c0)
00:07:50.411  passed
00:07:50.411    Test: thread_name ...passed
00:07:50.411    Test: channel ...[2024-11-20 04:58:04.215071] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2327:spdk_get_io_channel: *ERROR*: could not find io_device 0x55938615c380
00:07:50.411  passed
00:07:50.411    Test: channel_destroy_races ...passed
00:07:50.411    Test: thread_exit_test ...[2024-11-20 04:58:04.220186] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 654:thread_exit: *ERROR*: thread 0x619000007380 got timeout, and move it to the exited state forcefully
00:07:50.411  passed
00:07:50.411    Test: thread_update_stats_test ...passed
00:07:50.411    Test: nested_channel ...passed
00:07:50.411    Test: device_unregister_and_thread_exit_race ...passed
00:07:50.411    Test: cache_closest_timed_poller ...passed
00:07:50.411    Test: multi_timed_pollers_have_same_expiration ...passed
00:07:50.411    Test: io_device_lookup ...passed
00:07:50.411    Test: spdk_spin ...[2024-11-20 04:58:04.231055] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3111:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:50.411  [2024-11-20 04:58:04.231114] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x7ffef8ab4c70
00:07:50.411  [2024-11-20 04:58:04.231222] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3149:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:07:50.411  [2024-11-20 04:58:04.232922] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:07:50.411  [2024-11-20 04:58:04.233005] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x7ffef8ab4c70
00:07:50.411  [2024-11-20 04:58:04.233046] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3132:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:50.411  [2024-11-20 04:58:04.233090] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x7ffef8ab4c70
00:07:50.411  [2024-11-20 04:58:04.233138] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3132:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:07:50.411  [2024-11-20 04:58:04.233178] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x7ffef8ab4c70
00:07:50.411  [2024-11-20 04:58:04.233227] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3093:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:07:50.411  [2024-11-20 04:58:04.233294] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x7ffef8ab4c70
00:07:50.412  passed
00:07:50.412    Test: for_each_channel_and_thread_exit_race ...passed
00:07:50.412    Test: for_each_thread_and_thread_exit_race ...passed
00:07:50.412    Test: poller_get_name ...passed
00:07:50.412    Test: poller_get_id ...passed
00:07:50.412    Test: poller_get_state_str ...passed
00:07:50.412    Test: poller_get_period_ticks ...passed
00:07:50.412    Test: poller_get_stats ...passed
00:07:50.412  
00:07:50.412  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.412                suites      1      1    n/a      0        0
00:07:50.412                 tests     25     25     25      0        0
00:07:50.412               asserts    429    429    429      0      n/a
00:07:50.412  
00:07:50.412  Elapsed time =    0.056 seconds
00:07:50.412  
00:07:50.412  real	0m0.096s
00:07:50.412  user	0m0.080s
00:07:50.412  sys	0m0.016s
00:07:50.412   04:58:04 unittest.unittest_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.412   04:58:04 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x
00:07:50.412  ************************************
00:07:50.412  END TEST unittest_thread
00:07:50.412  ************************************
00:07:50.412   04:58:04 unittest -- unit/unittest.sh@261 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:50.412   04:58:04 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.412   04:58:04 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.412   04:58:04 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:50.412  ************************************
00:07:50.412  START TEST unittest_iobuf
00:07:50.412  ************************************
00:07:50.412   04:58:04 unittest.unittest_iobuf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:07:50.412  
00:07:50.412  
00:07:50.412       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.412       http://cunit.sourceforge.net/
00:07:50.412  
00:07:50.412  
00:07:50.412  Suite: io_channel
00:07:50.412    Test: iobuf ...passed
00:07:50.412    Test: iobuf_cache ...[2024-11-20 04:58:04.350896] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:50.412  [2024-11-20 04:58:04.351215] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:50.412  [2024-11-20 04:58:04.351357] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 427:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:07:50.412  [2024-11-20 04:58:04.351429] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 430:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:50.412  [2024-11-20 04:58:04.351515] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 415:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:07:50.412  [2024-11-20 04:58:04.351556] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 418:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:07:50.412  passed
00:07:50.412    Test: iobuf_priority ...passed
00:07:50.412  
00:07:50.412  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.412                suites      1      1    n/a      0        0
00:07:50.412                 tests      3      3      3      0        0
00:07:50.412               asserts    127    127    127      0      n/a
00:07:50.412  
00:07:50.412  Elapsed time =    0.007 seconds
00:07:50.670  
00:07:50.670  real	0m0.046s
00:07:50.670  user	0m0.033s
00:07:50.670  sys	0m0.013s
00:07:50.670   04:58:04 unittest.unittest_iobuf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:50.670   04:58:04 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x
00:07:50.670  ************************************
00:07:50.670  END TEST unittest_iobuf
00:07:50.670  ************************************
00:07:50.670   04:58:04 unittest -- unit/unittest.sh@262 -- # run_test unittest_util unittest_util
00:07:50.670   04:58:04 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:50.670   04:58:04 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:50.670   04:58:04 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:50.670  ************************************
00:07:50.670  START TEST unittest_util
00:07:50.670  ************************************
00:07:50.670   04:58:04 unittest.unittest_util -- common/autotest_common.sh@1129 -- # unittest_util
00:07:50.670   04:58:04 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:07:50.670  
00:07:50.670  
00:07:50.670       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.670       http://cunit.sourceforge.net/
00:07:50.670  
00:07:50.670  
00:07:50.670  Suite: base64
00:07:50.670    Test: test_base64_get_encoded_strlen ...passed
00:07:50.670    Test: test_base64_get_decoded_len ...passed
00:07:50.670    Test: test_base64_encode ...passed
00:07:50.670    Test: test_base64_decode ...passed
00:07:50.670    Test: test_base64_urlsafe_encode ...passed
00:07:50.670    Test: test_base64_urlsafe_decode ...passed
00:07:50.670  
00:07:50.670  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.670                suites      1      1    n/a      0        0
00:07:50.670                 tests      6      6      6      0        0
00:07:50.670               asserts    112    112    112      0      n/a
00:07:50.670  
00:07:50.670  Elapsed time =    0.000 seconds
00:07:50.670   04:58:04 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:07:50.670  
00:07:50.670  
00:07:50.670       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.670       http://cunit.sourceforge.net/
00:07:50.670  
00:07:50.670  
00:07:50.670  Suite: bit_array
00:07:50.670    Test: test_1bit ...passed
00:07:50.670    Test: test_64bit ...passed
00:07:50.670    Test: test_find ...passed
00:07:50.670    Test: test_resize ...passed
00:07:50.670    Test: test_errors ...passed
00:07:50.670    Test: test_count ...passed
00:07:50.670    Test: test_mask_store_load ...passed
00:07:50.670    Test: test_mask_clear ...passed
00:07:50.670  
00:07:50.670  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.670                suites      1      1    n/a      0        0
00:07:50.670                 tests      8      8      8      0        0
00:07:50.670               asserts   5075   5075   5075      0      n/a
00:07:50.670  
00:07:50.670  Elapsed time =    0.002 seconds
00:07:50.670   04:58:04 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:07:50.670  
00:07:50.670  
00:07:50.670       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.670       http://cunit.sourceforge.net/
00:07:50.670  
00:07:50.670  
00:07:50.670  Suite: cpuset
00:07:50.670    Test: test_cpuset ...passed
00:07:50.670    Test: test_cpuset_parse ...[2024-11-20 04:58:04.500430] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '['
00:07:50.670  [2024-11-20 04:58:04.500719] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:07:50.670  [2024-11-20 04:58:04.500818] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:07:50.670  [2024-11-20 04:58:04.500900] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 236:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:07:50.670  [2024-11-20 04:58:04.500954] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:07:50.670  [2024-11-20 04:58:04.500997] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:07:50.670  [2024-11-20 04:58:04.501035] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:07:50.670  [2024-11-20 04:58:04.501090] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:07:50.670  passed
00:07:50.670    Test: test_cpuset_fmt ...passed
00:07:50.670    Test: test_cpuset_foreach ...passed
00:07:50.670  
00:07:50.670  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.670                suites      1      1    n/a      0        0
00:07:50.670                 tests      4      4      4      0        0
00:07:50.670               asserts     90     90     90      0      n/a
00:07:50.670  
00:07:50.670  Elapsed time =    0.002 seconds
00:07:50.670   04:58:04 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:07:50.670  
00:07:50.670  
00:07:50.670       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.670       http://cunit.sourceforge.net/
00:07:50.670  
00:07:50.670  
00:07:50.670  Suite: crc16
00:07:50.670    Test: test_crc16_t10dif ...passed
00:07:50.670    Test: test_crc16_t10dif_seed ...passed
00:07:50.670    Test: test_crc16_t10dif_copy ...passed
00:07:50.670  
00:07:50.670  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.670                suites      1      1    n/a      0        0
00:07:50.670                 tests      3      3      3      0        0
00:07:50.670               asserts      5      5      5      0      n/a
00:07:50.670  
00:07:50.670  Elapsed time =    0.000 seconds
00:07:50.670   04:58:04 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:07:50.670  
00:07:50.670  
00:07:50.670       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.670       http://cunit.sourceforge.net/
00:07:50.670  
00:07:50.670  
00:07:50.670  Suite: crc32_ieee
00:07:50.670    Test: test_crc32_ieee ...passed
00:07:50.670  
00:07:50.670  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.670                suites      1      1    n/a      0        0
00:07:50.670                 tests      1      1      1      0        0
00:07:50.670               asserts      1      1      1      0      n/a
00:07:50.670  
00:07:50.670  Elapsed time =    0.000 seconds
00:07:50.670   04:58:04 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:07:50.670  
00:07:50.670  
00:07:50.670       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.670       http://cunit.sourceforge.net/
00:07:50.670  
00:07:50.670  
00:07:50.670  Suite: crc32c
00:07:50.670    Test: test_crc32c ...passed
00:07:50.670    Test: test_crc32c_nvme ...passed
00:07:50.670  
00:07:50.670  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.670                suites      1      1    n/a      0        0
00:07:50.670                 tests      2      2      2      0        0
00:07:50.670               asserts     16     16     16      0      n/a
00:07:50.670  
00:07:50.670  Elapsed time =    0.001 seconds
00:07:50.670   04:58:04 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:07:50.670  
00:07:50.670  
00:07:50.670       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.670       http://cunit.sourceforge.net/
00:07:50.670  
00:07:50.670  
00:07:50.670  Suite: crc64
00:07:50.670    Test: test_crc64_nvme ...passed
00:07:50.670  
00:07:50.670  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.670                suites      1      1    n/a      0        0
00:07:50.670                 tests      1      1      1      0        0
00:07:50.670               asserts      4      4      4      0      n/a
00:07:50.670  
00:07:50.670  Elapsed time =    0.001 seconds
00:07:50.929   04:58:04 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:07:50.929  
00:07:50.929  
00:07:50.929       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.929       http://cunit.sourceforge.net/
00:07:50.929  
00:07:50.929  
00:07:50.929  Suite: string
00:07:50.929    Test: test_parse_ip_addr ...passed
00:07:50.929    Test: test_str_chomp ...passed
00:07:50.929    Test: test_parse_capacity ...passed
00:07:50.929    Test: test_sprintf_append_realloc ...passed
00:07:50.929    Test: test_strtol ...passed
00:07:50.929    Test: test_strtoll ...passed
00:07:50.929    Test: test_strarray ...passed
00:07:50.929    Test: test_strcpy_replace ...passed
00:07:50.929  
00:07:50.929  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:50.929                suites      1      1    n/a      0        0
00:07:50.929                 tests      8      8      8      0        0
00:07:50.929               asserts    161    161    161      0      n/a
00:07:50.929  
00:07:50.929  Elapsed time =    0.001 seconds
00:07:50.929   04:58:04 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:07:50.929  
00:07:50.929  
00:07:50.929       CUnit - A unit testing framework for C - Version 2.1-3
00:07:50.929       http://cunit.sourceforge.net/
00:07:50.929  
00:07:50.929  
00:07:50.929  Suite: dif
00:07:50.929    Test: dif_generate_and_verify_test ...[2024-11-20 04:58:04.669883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:50.929  [2024-11-20 04:58:04.670408] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:50.929  [2024-11-20 04:58:04.670740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:07:50.929  [2024-11-20 04:58:04.671104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:50.929  [2024-11-20 04:58:04.671513] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:50.929  [2024-11-20 04:58:04.671874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:07:50.929  passed
00:07:50.929    Test: dif_disable_check_test ...[2024-11-20 04:58:04.673006] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:50.929  [2024-11-20 04:58:04.673343] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:50.929  [2024-11-20 04:58:04.673661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:07:50.930  passed
00:07:50.930    Test: dif_generate_and_verify_different_pi_formats_test ...[2024-11-20 04:58:04.674774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a80000, Actual=b9848de
00:07:50.930  [2024-11-20 04:58:04.675138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b98, Actual=b0a8
00:07:50.930  [2024-11-20 04:58:04.675483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:07:50.930  [2024-11-20 04:58:04.675867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:07:50.930  [2024-11-20 04:58:04.676211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:50.930  [2024-11-20 04:58:04.676555] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:50.930  [2024-11-20 04:58:04.676878] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:50.930  [2024-11-20 04:58:04.677244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:07:50.930  [2024-11-20 04:58:04.677599] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:50.930  [2024-11-20 04:58:04.677977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:50.930  [2024-11-20 04:58:04.678352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:07:50.930  passed
00:07:50.930    Test: dif_apptag_mask_test ...[2024-11-20 04:58:04.678716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:50.930  [2024-11-20 04:58:04.679083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:07:50.930  passed
00:07:50.930    Test: dif_sec_8_md_8_error_test ...[2024-11-20 04:58:04.679309] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 609:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:07:50.930  passed
00:07:50.930    Test: dif_sec_512_md_0_error_test ...passed
00:07:50.930    Test: dif_sec_512_md_16_error_test ...[2024-11-20 04:58:04.679453] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:50.930  [2024-11-20 04:58:04.679518] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:50.930  [2024-11-20 04:58:04.679584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:50.930  passed
00:07:50.930    Test: dif_sec_4096_md_0_8_error_test ...[2024-11-20 04:58:04.679638] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:50.930  [2024-11-20 04:58:04.679692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:50.930  passed
00:07:50.930    Test: dif_sec_4100_md_128_error_test ...[2024-11-20 04:58:04.679737] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:50.930  [2024-11-20 04:58:04.679771] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:50.930  [2024-11-20 04:58:04.679833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:50.930  [2024-11-20 04:58:04.679897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:50.930  passed
00:07:50.930    Test: dif_guard_seed_test ...passed
00:07:50.930    Test: dif_guard_value_test ...passed
00:07:50.930    Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:07:50.930    Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:50.930    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-20 04:58:04.727495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd5c, Actual=fd4c
00:07:50.930  [2024-11-20 04:58:04.730124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fe31, Actual=fe21
00:07:50.930  [2024-11-20 04:58:04.732750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.735425] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.738044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:50.930  [2024-11-20 04:58:04.740746] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:50.930  [2024-11-20 04:58:04.743445] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd4c, Actual=8e12
00:07:50.930  [2024-11-20 04:58:04.744853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fe21, Actual=4615
00:07:50.930  [2024-11-20 04:58:04.746231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1aa753ed, Actual=1ab753ed
00:07:50.930  [2024-11-20 04:58:04.748893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=38474660, Actual=38574660
00:07:50.930  [2024-11-20 04:58:04.751614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.754285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.756888] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:50.930  [2024-11-20 04:58:04.759611] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:50.930  [2024-11-20 04:58:04.762203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1ab753ed, Actual=51c8df1a
00:07:50.930  [2024-11-20 04:58:04.763591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=38574660, Actual=3f37d053
00:07:50.930  [2024-11-20 04:58:04.765069] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:50.930  [2024-11-20 04:58:04.767702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=88110a2d4837a266, Actual=88010a2d4837a266
00:07:50.930  [2024-11-20 04:58:04.770382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.773008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.775713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:50.930  [2024-11-20 04:58:04.778325] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:50.930  [2024-11-20 04:58:04.780790] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:50.930  [2024-11-20 04:58:04.782126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=88010a2d4837a266, Actual=30b0141cc3d8528e
00:07:50.930  passed
00:07:50.930    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-11-20 04:58:04.782540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd5c, Actual=fd4c
00:07:50.930  [2024-11-20 04:58:04.782867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe31, Actual=fe21
00:07:50.930  [2024-11-20 04:58:04.783177] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.783504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.783808] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.930  [2024-11-20 04:58:04.784150] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.930  [2024-11-20 04:58:04.784461] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=8e12
00:07:50.930  [2024-11-20 04:58:04.784687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=4615
00:07:50.930  [2024-11-20 04:58:04.784908] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1aa753ed, Actual=1ab753ed
00:07:50.930  [2024-11-20 04:58:04.785217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38474660, Actual=38574660
00:07:50.930  [2024-11-20 04:58:04.785528] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.785853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.930  [2024-11-20 04:58:04.786155] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.930  [2024-11-20 04:58:04.786473] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.930  [2024-11-20 04:58:04.786772] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=51c8df1a
00:07:50.930  [2024-11-20 04:58:04.786997] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=3f37d053
00:07:50.931  [2024-11-20 04:58:04.787216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:50.931  [2024-11-20 04:58:04.787554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88110a2d4837a266, Actual=88010a2d4837a266
00:07:50.931  [2024-11-20 04:58:04.787875] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.788176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.788487] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.788786] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.789100] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:50.931  [2024-11-20 04:58:04.789346] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=30b0141cc3d8528e
00:07:50.931  passed
00:07:50.931    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-11-20 04:58:04.789620] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd5c, Actual=fd4c
00:07:50.931  [2024-11-20 04:58:04.789929] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe31, Actual=fe21
00:07:50.931  [2024-11-20 04:58:04.790246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.790551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.790855] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.791159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.791483] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=8e12
00:07:50.931  [2024-11-20 04:58:04.791711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=4615
00:07:50.931  [2024-11-20 04:58:04.791935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1aa753ed, Actual=1ab753ed
00:07:50.931  [2024-11-20 04:58:04.792240] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38474660, Actual=38574660
00:07:50.931  [2024-11-20 04:58:04.792548] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.792884] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.793191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.931  [2024-11-20 04:58:04.793521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.931  [2024-11-20 04:58:04.793835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=51c8df1a
00:07:50.931  [2024-11-20 04:58:04.794062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=3f37d053
00:07:50.931  [2024-11-20 04:58:04.794299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:50.931  [2024-11-20 04:58:04.794625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88110a2d4837a266, Actual=88010a2d4837a266
00:07:50.931  [2024-11-20 04:58:04.794934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.795242] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.795591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.795919] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.796225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:50.931  [2024-11-20 04:58:04.796474] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=30b0141cc3d8528e
00:07:50.931  passed
00:07:50.931    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-11-20 04:58:04.796735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd5c, Actual=fd4c
00:07:50.931  [2024-11-20 04:58:04.797042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe31, Actual=fe21
00:07:50.931  [2024-11-20 04:58:04.797359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.797677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.797987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.798311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.798621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=8e12
00:07:50.931  [2024-11-20 04:58:04.798842] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=4615
00:07:50.931  [2024-11-20 04:58:04.799071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1aa753ed, Actual=1ab753ed
00:07:50.931  [2024-11-20 04:58:04.799389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38474660, Actual=38574660
00:07:50.931  [2024-11-20 04:58:04.799702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.800021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.800326] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.931  [2024-11-20 04:58:04.800630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.931  [2024-11-20 04:58:04.800935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=51c8df1a
00:07:50.931  [2024-11-20 04:58:04.801142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=3f37d053
00:07:50.931  [2024-11-20 04:58:04.801382] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:50.931  [2024-11-20 04:58:04.801709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88110a2d4837a266, Actual=88010a2d4837a266
00:07:50.931  [2024-11-20 04:58:04.802011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.802318] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.802608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.802910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.803217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:50.931  [2024-11-20 04:58:04.803477] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=30b0141cc3d8528e
00:07:50.931  passed
00:07:50.931    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-11-20 04:58:04.803739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd5c, Actual=fd4c
00:07:50.931  [2024-11-20 04:58:04.804053] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe31, Actual=fe21
00:07:50.931  [2024-11-20 04:58:04.804353] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.804654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.804966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.805308] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.931  [2024-11-20 04:58:04.805615] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=8e12
00:07:50.931  [2024-11-20 04:58:04.805845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=4615
00:07:50.931  passed
00:07:50.931    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-11-20 04:58:04.806087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1aa753ed, Actual=1ab753ed
00:07:50.931  [2024-11-20 04:58:04.806397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38474660, Actual=38574660
00:07:50.931  [2024-11-20 04:58:04.806701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.807025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.931  [2024-11-20 04:58:04.807336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.931  [2024-11-20 04:58:04.807643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.931  [2024-11-20 04:58:04.807959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=51c8df1a
00:07:50.931  [2024-11-20 04:58:04.808178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=3f37d053
00:07:50.931  [2024-11-20 04:58:04.808439] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:50.931  [2024-11-20 04:58:04.808777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88110a2d4837a266, Actual=88010a2d4837a266
00:07:50.932  [2024-11-20 04:58:04.809084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.809397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.809717] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.932  [2024-11-20 04:58:04.810015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.932  [2024-11-20 04:58:04.810323] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:50.932  [2024-11-20 04:58:04.810562] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=30b0141cc3d8528e
00:07:50.932  passed
00:07:50.932    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-11-20 04:58:04.810824] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd5c, Actual=fd4c
00:07:50.932  [2024-11-20 04:58:04.811135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe31, Actual=fe21
00:07:50.932  [2024-11-20 04:58:04.811444] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.811752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.812058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.932  [2024-11-20 04:58:04.812378] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.932  [2024-11-20 04:58:04.812688] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=8e12
00:07:50.932  [2024-11-20 04:58:04.812915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=4615
00:07:50.932  passed
00:07:50.932    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-11-20 04:58:04.813168] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1aa753ed, Actual=1ab753ed
00:07:50.932  [2024-11-20 04:58:04.813503] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38474660, Actual=38574660
00:07:50.932  [2024-11-20 04:58:04.813812] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.814130] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.814440] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.932  [2024-11-20 04:58:04.814743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=10000000000058
00:07:50.932  [2024-11-20 04:58:04.815048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=51c8df1a
00:07:50.932  [2024-11-20 04:58:04.815268] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=3f37d053
00:07:50.932  [2024-11-20 04:58:04.815511] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:50.932  [2024-11-20 04:58:04.815851] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88110a2d4837a266, Actual=88010a2d4837a266
00:07:50.932  [2024-11-20 04:58:04.816172] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.816478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.816788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.932  [2024-11-20 04:58:04.817091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=100058
00:07:50.932  [2024-11-20 04:58:04.817404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:50.932  [2024-11-20 04:58:04.817655] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=30b0141cc3d8528e
00:07:50.932  passed
00:07:50.932    Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed
00:07:50.932    Test: dif_copy_sec_512_md_8_dif_disable_single_iov ...passed
00:07:50.932    Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:50.932    Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:50.932    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:50.932    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_bounce_iovs_test ...passed
00:07:50.932    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:50.932    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:50.932    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:50.932    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:50.932    Test: dif_copy_sec_512_md_8_prchk_7_multi_bounce_iovs_complex_splits ...passed
00:07:50.932    Test: dif_copy_sec_512_md_8_dif_disable_multi_bounce_iovs_complex_splits ...passed
00:07:50.932    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:50.932    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-20 04:58:04.874796] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd5c, Actual=fd4c
00:07:50.932  [2024-11-20 04:58:04.875996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=bea3, Actual=beb3
00:07:50.932  [2024-11-20 04:58:04.877146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.878272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:50.932  [2024-11-20 04:58:04.879403] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:50.932  [2024-11-20 04:58:04.880509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:50.932  [2024-11-20 04:58:04.881628] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd4c, Actual=8e12
00:07:50.932  [2024-11-20 04:58:04.882726] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=92c0, Actual=2af4
00:07:51.190  [2024-11-20 04:58:04.883874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1aa753ed, Actual=1ab753ed
00:07:51.190  [2024-11-20 04:58:04.884991] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=8e87cf09, Actual=8e97cf09
00:07:51.190  [2024-11-20 04:58:04.886120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.190  [2024-11-20 04:58:04.887225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.190  [2024-11-20 04:58:04.888363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.190  [2024-11-20 04:58:04.889484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.191  [2024-11-20 04:58:04.890640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1ab753ed, Actual=51c8df1a
00:07:51.191  [2024-11-20 04:58:04.891774] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=9bbd506d, Actual=9cddc65e
00:07:51.191  [2024-11-20 04:58:04.892897] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:51.191  [2024-11-20 04:58:04.894017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=df0427824c2d4eb8, Actual=df1427824c2d4eb8
00:07:51.191  [2024-11-20 04:58:04.895134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.896254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.897412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.898534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.899659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:51.191  passed
00:07:51.191    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-20 04:58:04.900770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=4828e06b4944356f, Actual=f099fe5ac2abc587
00:07:51.191  [2024-11-20 04:58:04.901118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd5c, Actual=fd4c
00:07:51.191  [2024-11-20 04:58:04.901433] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=bea3, Actual=beb3
00:07:51.191  [2024-11-20 04:58:04.901715] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.902000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.902295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.902577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.902872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd4c, Actual=8e12
00:07:51.191  [2024-11-20 04:58:04.903169] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=92c0, Actual=2af4
00:07:51.191  [2024-11-20 04:58:04.903481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1aa753ed, Actual=1ab753ed
00:07:51.191  [2024-11-20 04:58:04.903761] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=d7386b2d, Actual=d7286b2d
00:07:51.191  [2024-11-20 04:58:04.904042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.904338] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.904621] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.191  [2024-11-20 04:58:04.904928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.191  [2024-11-20 04:58:04.905218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1ab753ed, Actual=51c8df1a
00:07:51.191  [2024-11-20 04:58:04.905524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=c202f449, Actual=c562627a
00:07:51.191  [2024-11-20 04:58:04.905832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:51.191  [2024-11-20 04:58:04.906124] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=df0427824c2d4eb8, Actual=df1427824c2d4eb8
00:07:51.191  [2024-11-20 04:58:04.906424] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.906729] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.907000] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.907288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.907577] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:51.191  [2024-11-20 04:58:04.907877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=4828e06b4944356f, Actual=f099fe5ac2abc587
00:07:51.191  passed
00:07:51.191    Test: dix_sec_0_md_8_error ...[2024-11-20 04:58:04.907956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 609:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:07:51.191  passed
00:07:51.191    Test: dix_sec_512_md_0_error ...[2024-11-20 04:58:04.908021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:51.191  passed
00:07:51.191    Test: dix_sec_512_md_16_error ...[2024-11-20 04:58:04.908067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:51.191  [2024-11-20 04:58:04.908103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 620:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:07:51.191  passed
00:07:51.191    Test: dix_sec_4096_md_0_8_error ...[2024-11-20 04:58:04.908147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:51.191  [2024-11-20 04:58:04.908183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:51.191  [2024-11-20 04:58:04.908226] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:51.191  [2024-11-20 04:58:04.908274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 594:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:07:51.191  passed
00:07:51.191    Test: dix_sec_512_md_8_prchk_0_single_iov ...passed
00:07:51.191    Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:07:51.191    Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:07:51.191    Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:07:51.191    Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:07:51.191    Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:07:51.191    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:07:51.191    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:07:51.191    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:07:51.191    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-11-20 04:58:04.949536] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd5c, Actual=fd4c
00:07:51.191  [2024-11-20 04:58:04.950427] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=bea3, Actual=beb3
00:07:51.191  [2024-11-20 04:58:04.951396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.952393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.953362] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.954253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.955151] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd4c, Actual=8e12
00:07:51.191  [2024-11-20 04:58:04.956029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=92c0, Actual=2af4
00:07:51.191  [2024-11-20 04:58:04.956935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1aa753ed, Actual=1ab753ed
00:07:51.191  [2024-11-20 04:58:04.957895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=8e87cf09, Actual=8e97cf09
00:07:51.191  [2024-11-20 04:58:04.958867] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.960008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.960921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.191  [2024-11-20 04:58:04.961816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.191  [2024-11-20 04:58:04.962700] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1ab753ed, Actual=51c8df1a
00:07:51.191  [2024-11-20 04:58:04.963595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=9bbd506d, Actual=9cddc65e
00:07:51.191  [2024-11-20 04:58:04.964475] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:51.191  [2024-11-20 04:58:04.965355] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=df0427824c2d4eb8, Actual=df1427824c2d4eb8
00:07:51.191  [2024-11-20 04:58:04.966253] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.967134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.191  [2024-11-20 04:58:04.968013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.968898] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.191  [2024-11-20 04:58:04.969773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:51.191  passed
00:07:51.191    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-11-20 04:58:04.970659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=4828e06b4944356f, Actual=f099fe5ac2abc587
00:07:51.191  [2024-11-20 04:58:04.970939] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd5c, Actual=fd4c
00:07:51.191  [2024-11-20 04:58:04.971173] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=bea3, Actual=beb3
00:07:51.191  [2024-11-20 04:58:04.971412] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.192  [2024-11-20 04:58:04.971631] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.192  [2024-11-20 04:58:04.971848] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.192  [2024-11-20 04:58:04.972074] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.192  [2024-11-20 04:58:04.972291] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=fd4c, Actual=8e12
00:07:51.192  [2024-11-20 04:58:04.972510] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=92c0, Actual=2af4
00:07:51.192  [2024-11-20 04:58:04.972730] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1aa753ed, Actual=1ab753ed
00:07:51.192  [2024-11-20 04:58:04.972946] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=8e87cf09, Actual=8e97cf09
00:07:51.192  [2024-11-20 04:58:04.973159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.192  [2024-11-20 04:58:04.973380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.192  [2024-11-20 04:58:04.973619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.192  [2024-11-20 04:58:04.973829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=10000000000059
00:07:51.192  [2024-11-20 04:58:04.974048] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=1ab753ed, Actual=51c8df1a
00:07:51.192  [2024-11-20 04:58:04.974267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=9bbd506d, Actual=9cddc65e
00:07:51.192  [2024-11-20 04:58:04.974484] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a566a7728ecc20d3, Actual=a576a7728ecc20d3
00:07:51.192  [2024-11-20 04:58:04.974702] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=df0427824c2d4eb8, Actual=df1427824c2d4eb8
00:07:51.192  [2024-11-20 04:58:04.974931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.192  [2024-11-20 04:58:04.975144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 925:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=89,  Expected=88, Actual=98
00:07:51.192  [2024-11-20 04:58:04.975363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.192  [2024-11-20 04:58:04.975605] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 860:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=89, Expected=59, Actual=100059
00:07:51.192  [2024-11-20 04:58:04.975819] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=a576a7728ecc20d3, Actual=d0b0c44ac0888cdb
00:07:51.192  [2024-11-20 04:58:04.976038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 910:_dif_verify: *ERROR*: Failed to compare Guard: LBA=89,  Expected=4828e06b4944356f, Actual=f099fe5ac2abc587
00:07:51.192  passed
00:07:51.192    Test: set_md_interleave_iovs_test ...passed
00:07:51.192    Test: set_md_interleave_iovs_split_test ...passed
00:07:51.192    Test: dif_generate_stream_pi_16_test ...passed
00:07:51.192    Test: dif_generate_stream_test ...passed
00:07:51.192    Test: set_md_interleave_iovs_alignment_test ...passed
00:07:51.192    Test: dif_generate_split_test ...[2024-11-20 04:58:04.981936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1946:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur.
00:07:51.192  passed
00:07:51.192    Test: set_md_interleave_iovs_multi_segments_test ...passed
00:07:51.192    Test: dif_verify_split_test ...passed
00:07:51.192    Test: dif_verify_stream_multi_segments_test ...passed
00:07:51.192    Test: update_crc32c_pi_16_test ...passed
00:07:51.192    Test: update_crc32c_test ...passed
00:07:51.192    Test: dif_update_crc32c_split_test ...passed
00:07:51.192    Test: dif_update_crc32c_stream_multi_segments_test ...passed
00:07:51.192    Test: get_range_with_md_test ...passed
00:07:51.192    Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed
00:07:51.192    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed
00:07:51.192    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:51.192    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed
00:07:51.192    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed
00:07:51.192    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:07:51.192    Test: dif_generate_and_verify_unmap_test ...passed
00:07:51.192    Test: dif_pi_format_check_test ...passed
00:07:51.192    Test: dif_type_check_test ...passed
00:07:51.192  
00:07:51.192  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.192                suites      1      1    n/a      0        0
00:07:51.192                 tests     90     90     90      0        0
00:07:51.192               asserts   3705   3705   3705      0      n/a
00:07:51.192  
00:07:51.192  Elapsed time =    0.347 seconds
00:07:51.192   04:58:05 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
00:07:51.192  
00:07:51.192  
00:07:51.192       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.192       http://cunit.sourceforge.net/
00:07:51.192  
00:07:51.192  
00:07:51.192  Suite: iov
00:07:51.192    Test: test_single_iov ...passed
00:07:51.192    Test: test_simple_iov ...passed
00:07:51.192    Test: test_complex_iov ...passed
00:07:51.192    Test: test_iovs_to_buf ...passed
00:07:51.192    Test: test_buf_to_iovs ...passed
00:07:51.192    Test: test_memset ...passed
00:07:51.192    Test: test_iov_one ...passed
00:07:51.192    Test: test_iov_xfer ...passed
00:07:51.192  
00:07:51.192  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.192                suites      1      1    n/a      0        0
00:07:51.192                 tests      8      8      8      0        0
00:07:51.192               asserts    156    156    156      0      n/a
00:07:51.192  
00:07:51.192  Elapsed time =    0.000 seconds
00:07:51.192   04:58:05 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut
00:07:51.192  
00:07:51.192  
00:07:51.192       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.192       http://cunit.sourceforge.net/
00:07:51.192  
00:07:51.192  
00:07:51.192  Suite: math
00:07:51.192    Test: test_serial_number_arithmetic ...passed
00:07:51.192  Suite: erase
00:07:51.192    Test: test_memset_s ...passed
00:07:51.192  
00:07:51.192  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.192                suites      2      2    n/a      0        0
00:07:51.192                 tests      2      2      2      0        0
00:07:51.192               asserts     18     18     18      0      n/a
00:07:51.192  
00:07:51.192  Elapsed time =    0.000 seconds
00:07:51.192   04:58:05 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut
00:07:51.192  
00:07:51.192  
00:07:51.192       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.192       http://cunit.sourceforge.net/
00:07:51.192  
00:07:51.192  
00:07:51.192  Suite: pipe
00:07:51.192    Test: test_create_destroy ...passed
00:07:51.192    Test: test_write_get_buffer ...passed
00:07:51.192    Test: test_write_advance ...passed
00:07:51.192    Test: test_read_get_buffer ...passed
00:07:51.192    Test: test_read_advance ...passed
00:07:51.192    Test: test_data ...passed
00:07:51.192  
00:07:51.192  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.192                suites      1      1    n/a      0        0
00:07:51.192                 tests      6      6      6      0        0
00:07:51.192               asserts    251    251    251      0      n/a
00:07:51.192  
00:07:51.192  Elapsed time =    0.000 seconds
00:07:51.192    04:58:05 unittest.unittest_util -- unit/unittest.sh@146 -- # uname -s
00:07:51.192   04:58:05 unittest.unittest_util -- unit/unittest.sh@146 -- # '[' Linux = Linux ']'
00:07:51.192   04:58:05 unittest.unittest_util -- unit/unittest.sh@147 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/fd_group.c/fd_group_ut
00:07:51.192  
00:07:51.192  
00:07:51.192       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.192       http://cunit.sourceforge.net/
00:07:51.192  
00:07:51.192  
00:07:51.192  Suite: fd_group
00:07:51.192    Test: test_fd_group_basic ...passed
00:07:51.192    Test: test_fd_group_nest_unnest ...passed
00:07:51.192  
00:07:51.192  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.192                suites      1      1    n/a      0        0
00:07:51.192                 tests      2      2      2      0        0
00:07:51.192               asserts     41     41     41      0      n/a
00:07:51.192  
00:07:51.192  Elapsed time =    0.000 seconds
00:07:51.192  
00:07:51.192  real	0m0.710s
00:07:51.192  user	0m0.578s
00:07:51.192  sys	0m0.134s
00:07:51.192   04:58:05 unittest.unittest_util -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.192  ************************************
00:07:51.192  END TEST unittest_util
00:07:51.192  ************************************
00:07:51.192   04:58:05 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x
00:07:51.451   04:58:05 unittest -- unit/unittest.sh@263 -- # [[ y == y ]]
00:07:51.451   04:58:05 unittest -- unit/unittest.sh@264 -- # run_test unittest_fsdev unittest_fsdev
00:07:51.451   04:58:05 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.451   04:58:05 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.451   04:58:05 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:51.451  ************************************
00:07:51.451  START TEST unittest_fsdev
00:07:51.451  ************************************
00:07:51.451   04:58:05 unittest.unittest_fsdev -- common/autotest_common.sh@1129 -- # unittest_fsdev
00:07:51.451   04:58:05 unittest.unittest_fsdev -- unit/unittest.sh@152 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/fsdev/fsdev.c/fsdev_ut
00:07:51.451  
00:07:51.451  
00:07:51.451       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.451       http://cunit.sourceforge.net/
00:07:51.451  
00:07:51.451  
00:07:51.451  Suite: fsdev
00:07:51.451    Test: ut_fsdev_test_open_close ...passed
00:07:51.451    Test: ut_fsdev_test_set_opts ...[2024-11-20 04:58:05.190752] fsdev.c: 631:spdk_fsdev_set_opts: *ERROR*: opts cannot be NULL
00:07:51.451  [2024-11-20 04:58:05.191025] fsdev.c: 636:spdk_fsdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:07:51.451  passed
00:07:51.451    Test: ut_fsdev_test_get_io_channel ...passed
00:07:51.451    Test: ut_fsdev_test_mount_ok ...passed
00:07:51.451    Test: ut_fsdev_test_mount_err ...passed
00:07:51.451    Test: ut_fsdev_test_umount ...passed
00:07:51.451    Test: ut_fsdev_test_lookup_ok ...passed
00:07:51.451    Test: ut_fsdev_test_lookup_err ...passed
00:07:51.451    Test: ut_fsdev_test_forget ...passed
00:07:51.451    Test: ut_fsdev_test_getattr ...passed
00:07:51.451    Test: ut_fsdev_test_setattr ...passed
00:07:51.451    Test: ut_fsdev_test_readlink ...passed
00:07:51.451    Test: ut_fsdev_test_symlink ...passed
00:07:51.451    Test: ut_fsdev_test_mknod ...passed
00:07:51.451    Test: ut_fsdev_test_mkdir ...passed
00:07:51.451    Test: ut_fsdev_test_unlink ...passed
00:07:51.451    Test: ut_fsdev_test_rmdir ...passed
00:07:51.451    Test: ut_fsdev_test_rename ...passed
00:07:51.451    Test: ut_fsdev_test_link ...passed
00:07:51.451    Test: ut_fsdev_test_fopen ...passed
00:07:51.451    Test: ut_fsdev_test_read ...passed
00:07:51.451    Test: ut_fsdev_test_write ...passed
00:07:51.451    Test: ut_fsdev_test_statfs ...passed
00:07:51.451    Test: ut_fsdev_test_release ...passed
00:07:51.451    Test: ut_fsdev_test_fsync ...passed
00:07:51.451    Test: ut_fsdev_test_getxattr ...passed
00:07:51.451    Test: ut_fsdev_test_setxattr ...passed
00:07:51.451    Test: ut_fsdev_test_listxattr ...passed
00:07:51.451    Test: ut_fsdev_test_listxattr_get_size ...passed
00:07:51.451    Test: ut_fsdev_test_removexattr ...passed
00:07:51.451    Test: ut_fsdev_test_flush ...passed
00:07:51.451    Test: ut_fsdev_test_opendir ...passed
00:07:51.451    Test: ut_fsdev_test_readdir ...passed
00:07:51.451    Test: ut_fsdev_test_releasedir ...passed
00:07:51.451    Test: ut_fsdev_test_fsyncdir ...passed
00:07:51.451    Test: ut_fsdev_test_flock ...passed
00:07:51.451    Test: ut_fsdev_test_create ...passed
00:07:51.451    Test: ut_fsdev_test_abort ...passed
00:07:51.451    Test: ut_fsdev_test_fallocate ...passed
00:07:51.451    Test: ut_fsdev_test_copy_file_range ...passed
00:07:51.451  
00:07:51.451  [2024-11-20 04:58:05.234534] fsdev.c: 354:fsdev_mgr_unregister_cb: *ERROR*: fsdev IO pool count is 65535 but should be 131070
00:07:51.451  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.451                suites      1      1    n/a      0        0
00:07:51.451                 tests     40     40     40      0        0
00:07:51.451               asserts   2840   2840   2840      0      n/a
00:07:51.451  
00:07:51.451  Elapsed time =    0.044 seconds
00:07:51.451  
00:07:51.451  real	0m0.083s
00:07:51.451  user	0m0.040s
00:07:51.451  sys	0m0.043s
00:07:51.451   04:58:05 unittest.unittest_fsdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.451   04:58:05 unittest.unittest_fsdev -- common/autotest_common.sh@10 -- # set +x
00:07:51.451  ************************************
00:07:51.451  END TEST unittest_fsdev
00:07:51.451  ************************************
00:07:51.451   04:58:05 unittest -- unit/unittest.sh@266 -- # [[ y == y ]]
00:07:51.451   04:58:05 unittest -- unit/unittest.sh@267 -- # run_test unittest_vhost /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:51.451   04:58:05 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.451   04:58:05 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.451   04:58:05 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:51.451  ************************************
00:07:51.451  START TEST unittest_vhost
00:07:51.451  ************************************
00:07:51.451   04:58:05 unittest.unittest_vhost -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/vhost/vhost.c/vhost_ut
00:07:51.451  
00:07:51.451  
00:07:51.451       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.451       http://cunit.sourceforge.net/
00:07:51.451  
00:07:51.451  
00:07:51.451  Suite: vhost_suite
00:07:51.451    Test: desc_to_iov_test ...[2024-11-20 04:58:05.318744] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c: 620:vhost_vring_desc_payload_to_iov: *ERROR*: SPDK_VHOST_IOVS_MAX(129) reached
00:07:51.451  passed
00:07:51.451    Test: create_controller_test ...[2024-11-20 04:58:05.324173] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:51.451  [2024-11-20 04:58:05.324325] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xf0 is invalid (core mask is 0xf)
00:07:51.451  [2024-11-20 04:58:05.324506] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c:  84:vhost_parse_core_mask: *ERROR*: one of selected cpu is outside of core mask(=f)
00:07:51.451  [2024-11-20 04:58:05.324627] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 130:vhost_dev_register: *ERROR*: cpumask 0xff is invalid (core mask is 0xf)
00:07:51.451  [2024-11-20 04:58:05.324700] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 125:vhost_dev_register: *ERROR*: Can't register controller with no name
00:07:51.452  [2024-11-20 04:58:05.325233] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1781:vhost_user_dev_init: *ERROR*: Resulting socket path for controller xxxx[…long run of 'x' padding from the over-long-name test case…] is too long: some_path/xxxx[…]
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
00:07:51.452  passed
00:07:51.452    Test: session_find_by_vid_test ...[2024-11-20 04:58:05.326361] /home/vagrant/spdk_repo/spdk/lib/vhost/vhost.c: 141:vhost_dev_register: *ERROR*: vhost controller vdev_name_0 already exists.
00:07:51.452  passed
00:07:51.452    Test: remove_controller_test ...[2024-11-20 04:58:05.328687] /home/vagrant/spdk_repo/spdk/lib/vhost/rte_vhost_user.c:1869:vhost_user_dev_unregister: *ERROR*: Controller vdev_name_0 has still valid connection.
00:07:51.452  passed
00:07:51.452    Test: vq_avail_ring_get_test ...passed
00:07:51.452    Test: vq_packed_ring_test ...passed
00:07:51.452    Test: vhost_blk_construct_test ...passed
00:07:51.452  
00:07:51.452  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.452                suites      1      1    n/a      0        0
00:07:51.452                 tests      7      7      7      0        0
00:07:51.452               asserts    147    147    147      0      n/a
00:07:51.452  
00:07:51.452  Elapsed time =    0.014 seconds
00:07:51.452  
00:07:51.452  real	0m0.052s
00:07:51.452  user	0m0.028s
00:07:51.452  sys	0m0.023s
00:07:51.452   04:58:05 unittest.unittest_vhost -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.452   04:58:05 unittest.unittest_vhost -- common/autotest_common.sh@10 -- # set +x
00:07:51.452  ************************************
00:07:51.452  END TEST unittest_vhost
00:07:51.452  ************************************
00:07:51.452   04:58:05 unittest -- unit/unittest.sh@269 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:51.452   04:58:05 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.452   04:58:05 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.452   04:58:05 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:51.452  ************************************
00:07:51.452  START TEST unittest_dma
00:07:51.452  ************************************
00:07:51.452   04:58:05 unittest.unittest_dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:07:51.452  
00:07:51.452  
00:07:51.452       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.452       http://cunit.sourceforge.net/
00:07:51.452  
00:07:51.452  
00:07:51.452  Suite: dma_suite
00:07:51.452    Test: test_dma ...[2024-11-20 04:58:05.405796] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c:  60:spdk_memory_domain_create: *ERROR*: Context size can't be 0
00:07:51.710  passed
00:07:51.710  
00:07:51.710  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.710                suites      1      1    n/a      0        0
00:07:51.710                 tests      1      1      1      0        0
00:07:51.710               asserts     54     54     54      0      n/a
00:07:51.710  
00:07:51.710  Elapsed time =    0.000 seconds
00:07:51.710  
00:07:51.710  real	0m0.028s
00:07:51.710  user	0m0.019s
00:07:51.710  sys	0m0.009s
00:07:51.710   04:58:05 unittest.unittest_dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.710   04:58:05 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x
00:07:51.710  ************************************
00:07:51.710  END TEST unittest_dma
00:07:51.710  ************************************
00:07:51.710   04:58:05 unittest -- unit/unittest.sh@271 -- # run_test unittest_init unittest_init
00:07:51.710   04:58:05 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.710   04:58:05 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.710   04:58:05 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:51.710  ************************************
00:07:51.710  START TEST unittest_init
00:07:51.711  ************************************
00:07:51.711   04:58:05 unittest.unittest_init -- common/autotest_common.sh@1129 -- # unittest_init
00:07:51.711   04:58:05 unittest.unittest_init -- unit/unittest.sh@156 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut
00:07:51.711  
00:07:51.711  
00:07:51.711       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.711       http://cunit.sourceforge.net/
00:07:51.711  
00:07:51.711  
00:07:51.711  Suite: subsystem_suite
00:07:51.711    Test: subsystem_sort_test_depends_on_single ...passed
00:07:51.711    Test: subsystem_sort_test_depends_on_multiple ...passed
00:07:51.711    Test: subsystem_sort_test_missing_dependency ...[2024-11-20 04:58:05.473934] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 196:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing
00:07:51.711  passed
00:07:51.711  
00:07:51.711  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.711                suites      1      1    n/a      0        0
00:07:51.711                 tests      3      3      3      0        0
00:07:51.711               asserts     20     20     20      0      n/a
00:07:51.711  
00:07:51.711  Elapsed time =    0.001 seconds
00:07:51.711  [2024-11-20 04:58:05.474294] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing
00:07:51.711  
00:07:51.711  real	0m0.032s
00:07:51.711  user	0m0.024s
00:07:51.711  sys	0m0.008s
00:07:51.711   04:58:05 unittest.unittest_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.711   04:58:05 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x
00:07:51.711  ************************************
00:07:51.711  END TEST unittest_init
00:07:51.711  ************************************
00:07:51.711   04:58:05 unittest -- unit/unittest.sh@272 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:07:51.711   04:58:05 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.711   04:58:05 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.711   04:58:05 unittest -- common/autotest_common.sh@10 -- # set +x
00:07:51.711  ************************************
00:07:51.711  START TEST unittest_keyring
00:07:51.711  ************************************
00:07:51.711   04:58:05 unittest.unittest_keyring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:07:51.711  
00:07:51.711  
00:07:51.711       CUnit - A unit testing framework for C - Version 2.1-3
00:07:51.711       http://cunit.sourceforge.net/
00:07:51.711  
00:07:51.711  
00:07:51.711  Suite: keyring
00:07:51.711    Test: test_keyring_add_remove ...[2024-11-20 04:58:05.540970] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists
00:07:51.711  passed
00:07:51.711    Test: test_keyring_get_put ...passed
00:07:51.711  
00:07:51.711  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:51.711                suites      1      1    n/a      0        0
00:07:51.711                 tests      2      2      2      0        0
00:07:51.711               asserts     46     46     46      0      n/a
00:07:51.711  
00:07:51.711  Elapsed time =    0.001 seconds
00:07:51.711  [2024-11-20 04:58:05.541614] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists
00:07:51.711  [2024-11-20 04:58:05.541706] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 168:spdk_keyring_remove_key: *ERROR*: Key 'key0' is not owned by module 'ut2'
00:07:51.711  [2024-11-20 04:58:05.541818] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key 'key0' does not exist
00:07:51.711  [2024-11-20 04:58:05.541860] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key ':key0' does not exist
00:07:51.711  [2024-11-20 04:58:05.541921] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:07:51.711  
00:07:51.711  real	0m0.027s
00:07:51.711  user	0m0.008s
00:07:51.711  sys	0m0.019s
00:07:51.711   04:58:05 unittest.unittest_keyring -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.711   04:58:05 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x
00:07:51.711  ************************************
00:07:51.711  END TEST unittest_keyring
00:07:51.711  ************************************
00:07:51.711   04:58:05 unittest -- unit/unittest.sh@274 -- # [[ y == y ]]
00:07:51.711    04:58:05 unittest -- unit/unittest.sh@275 -- # hostname
00:07:51.711   04:58:05 unittest -- unit/unittest.sh@275 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -d . -c --no-external -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:07:51.969  geninfo: WARNING: invalid characters removed from testname!
00:08:24.044   04:58:34 unittest -- unit/unittest.sh@276 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:08:25.947   04:58:39 unittest -- unit/unittest.sh@277 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:29.262   04:58:42 unittest -- unit/unittest.sh@278 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:32.544   04:58:45 unittest -- unit/unittest.sh@279 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:35.825   04:58:49 unittest -- unit/unittest.sh@280 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:38.359   04:58:52 unittest -- unit/unittest.sh@281 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:40.893   04:58:54 unittest -- unit/unittest.sh@282 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:08:40.893   04:58:54 unittest -- unit/unittest.sh@283 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:41.460  Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:08:41.460  Found 334 entries.
00:08:41.460  Found common filename prefix "/home/vagrant/spdk_repo/spdk"
00:08:41.460  Writing .css and .png files.
00:08:41.460  Generating output.
00:08:41.460  Processing file include/linux/virtio_ring.h
00:08:41.719  Processing file include/spdk/base64.h
00:08:41.719  Processing file include/spdk/bdev_module.h
00:08:41.719  Processing file include/spdk/nvme_spec.h
00:08:41.719  Processing file include/spdk/mmio.h
00:08:41.719  Processing file include/spdk/nvmf_transport.h
00:08:41.719  Processing file include/spdk/fsdev_module.h
00:08:41.719  Processing file include/spdk/histogram_data.h
00:08:41.719  Processing file include/spdk/endian.h
00:08:41.719  Processing file include/spdk/util.h
00:08:41.719  Processing file include/spdk/trace.h
00:08:41.719  Processing file include/spdk/thread.h
00:08:41.719  Processing file include/spdk/nvme.h
00:08:41.977  Processing file include/spdk_internal/nvme_tcp.h
00:08:41.977  Processing file include/spdk_internal/sgl.h
00:08:41.977  Processing file include/spdk_internal/virtio.h
00:08:41.977  Processing file include/spdk_internal/sock.h
00:08:41.977  Processing file include/spdk_internal/rdma_utils.h
00:08:41.977  Processing file include/spdk_internal/utf.h
00:08:42.236  Processing file lib/accel/accel_sw.c
00:08:42.236  Processing file lib/accel/accel_rpc.c
00:08:42.236  Processing file lib/accel/accel.c
00:08:42.494  Processing file lib/bdev/bdev_rpc.c
00:08:42.495  Processing file lib/bdev/bdev_zone.c
00:08:42.495  Processing file lib/bdev/bdev.c
00:08:42.495  Processing file lib/bdev/scsi_nvme.c
00:08:42.495  Processing file lib/bdev/part.c
00:08:42.754  Processing file lib/blob/zeroes.c
00:08:42.754  Processing file lib/blob/blob_bs_dev.c
00:08:42.754  Processing file lib/blob/blobstore.c
00:08:42.754  Processing file lib/blob/blobstore.h
00:08:42.754  Processing file lib/blob/request.c
00:08:42.754  Processing file lib/blobfs/tree.c
00:08:42.754  Processing file lib/blobfs/blobfs.c
00:08:42.754  Processing file lib/conf/conf.c
00:08:42.754  Processing file lib/dma/dma.c
00:08:43.322  Processing file lib/env_dpdk/pci_dpdk.c
00:08:43.322  Processing file lib/env_dpdk/pci_vmd.c
00:08:43.322  Processing file lib/env_dpdk/env.c
00:08:43.322  Processing file lib/env_dpdk/pci_dpdk_2211.c
00:08:43.322  Processing file lib/env_dpdk/pci_virtio.c
00:08:43.322  Processing file lib/env_dpdk/memory.c
00:08:43.322  Processing file lib/env_dpdk/threads.c
00:08:43.322  Processing file lib/env_dpdk/pci_ioat.c
00:08:43.322  Processing file lib/env_dpdk/sigbus_handler.c
00:08:43.322  Processing file lib/env_dpdk/pci.c
00:08:43.322  Processing file lib/env_dpdk/pci_dpdk_2207.c
00:08:43.322  Processing file lib/env_dpdk/init.c
00:08:43.322  Processing file lib/env_dpdk/pci_event.c
00:08:43.322  Processing file lib/env_dpdk/pci_idxd.c
00:08:43.322  Processing file lib/event/log_rpc.c
00:08:43.322  Processing file lib/event/scheduler_static.c
00:08:43.322  Processing file lib/event/app_rpc.c
00:08:43.322  Processing file lib/event/app.c
00:08:43.322  Processing file lib/event/reactor.c
00:08:43.322  Processing file lib/fsdev/fsdev.c
00:08:43.322  Processing file lib/fsdev/fsdev_io.c
00:08:43.322  Processing file lib/fsdev/fsdev_rpc.c
00:08:43.889  Processing file lib/ftl/ftl_band.h
00:08:43.889  Processing file lib/ftl/ftl_sb.c
00:08:43.889  Processing file lib/ftl/ftl_writer.h
00:08:43.889  Processing file lib/ftl/ftl_core.c
00:08:43.889  Processing file lib/ftl/ftl_core.h
00:08:43.889  Processing file lib/ftl/ftl_band.c
00:08:43.889  Processing file lib/ftl/ftl_p2l.c
00:08:43.889  Processing file lib/ftl/ftl_trace.c
00:08:43.889  Processing file lib/ftl/ftl_l2p.c
00:08:43.889  Processing file lib/ftl/ftl_layout.c
00:08:43.889  Processing file lib/ftl/ftl_writer.c
00:08:43.889  Processing file lib/ftl/ftl_l2p_flat.c
00:08:43.889  Processing file lib/ftl/ftl_nv_cache_io.h
00:08:43.889  Processing file lib/ftl/ftl_rq.c
00:08:43.889  Processing file lib/ftl/ftl_init.c
00:08:43.889  Processing file lib/ftl/ftl_reloc.c
00:08:43.889  Processing file lib/ftl/ftl_io.c
00:08:43.889  Processing file lib/ftl/ftl_p2l_log.c
00:08:43.889  Processing file lib/ftl/ftl_debug.c
00:08:43.889  Processing file lib/ftl/ftl_io.h
00:08:43.889  Processing file lib/ftl/ftl_nv_cache.h
00:08:43.889  Processing file lib/ftl/ftl_nv_cache.c
00:08:43.889  Processing file lib/ftl/ftl_band_ops.c
00:08:43.889  Processing file lib/ftl/ftl_debug.h
00:08:43.889  Processing file lib/ftl/ftl_l2p_cache.c
00:08:43.889  Processing file lib/ftl/base/ftl_base_bdev.c
00:08:43.889  Processing file lib/ftl/base/ftl_base_dev.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_band.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_self_test.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_startup.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_l2p.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_upgrade.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_recovery.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_ioch.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_shutdown.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_bdev.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_misc.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_md.c
00:08:44.148  Processing file lib/ftl/mngt/ftl_mngt_p2l.c
00:08:44.406  Processing file lib/ftl/nvc/ftl_nvc_bdev_vss.c
00:08:44.406  Processing file lib/ftl/nvc/ftl_nvc_bdev_non_vss.c
00:08:44.406  Processing file lib/ftl/nvc/ftl_nvc_bdev_common.c
00:08:44.406  Processing file lib/ftl/nvc/ftl_nvc_dev.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_sb_upgrade.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_band_upgrade.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_p2l_upgrade.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_trim_upgrade.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_sb_v5.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_chunk_upgrade.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_layout_upgrade.c
00:08:44.406  Processing file lib/ftl/upgrade/ftl_sb_v3.c
00:08:44.665  Processing file lib/ftl/utils/ftl_layout_tracker_bdev.c
00:08:44.665  Processing file lib/ftl/utils/ftl_property.h
00:08:44.665  Processing file lib/ftl/utils/ftl_addr_utils.h
00:08:44.665  Processing file lib/ftl/utils/ftl_df.h
00:08:44.665  Processing file lib/ftl/utils/ftl_bitmap.c
00:08:44.665  Processing file lib/ftl/utils/ftl_property.c
00:08:44.665  Processing file lib/ftl/utils/ftl_mempool.c
00:08:44.665  Processing file lib/ftl/utils/ftl_conf.c
00:08:44.665  Processing file lib/ftl/utils/ftl_md.c
00:08:44.923  Processing file lib/fuse_dispatcher/fuse_dispatcher.c
00:08:44.923  Processing file lib/idxd/idxd_user.c
00:08:44.923  Processing file lib/idxd/idxd_internal.h
00:08:44.923  Processing file lib/idxd/idxd.c
00:08:44.923  Processing file lib/init/rpc.c
00:08:44.923  Processing file lib/init/subsystem.c
00:08:44.923  Processing file lib/init/subsystem_rpc.c
00:08:44.923  Processing file lib/init/json_config.c
00:08:45.182  Processing file lib/ioat/ioat_internal.h
00:08:45.182  Processing file lib/ioat/ioat.c
00:08:45.440  Processing file lib/iscsi/iscsi_rpc.c
00:08:45.440  Processing file lib/iscsi/tgt_node.c
00:08:45.440  Processing file lib/iscsi/iscsi.c
00:08:45.440  Processing file lib/iscsi/init_grp.c
00:08:45.440  Processing file lib/iscsi/iscsi.h
00:08:45.440  Processing file lib/iscsi/param.c
00:08:45.440  Processing file lib/iscsi/task.c
00:08:45.440  Processing file lib/iscsi/conn.c
00:08:45.440  Processing file lib/iscsi/iscsi_subsystem.c
00:08:45.440  Processing file lib/iscsi/task.h
00:08:45.440  Processing file lib/iscsi/portal_grp.c
00:08:45.698  Processing file lib/json/json_util.c
00:08:45.698  Processing file lib/json/json_parse.c
00:08:45.698  Processing file lib/json/json_write.c
00:08:45.698  Processing file lib/jsonrpc/jsonrpc_client_tcp.c
00:08:45.698  Processing file lib/jsonrpc/jsonrpc_client.c
00:08:45.698  Processing file lib/jsonrpc/jsonrpc_server.c
00:08:45.698  Processing file lib/jsonrpc/jsonrpc_server_tcp.c
00:08:45.698  Processing file lib/keyring/keyring_rpc.c
00:08:45.698  Processing file lib/keyring/keyring.c
00:08:45.956  Processing file lib/log/log_deprecated.c
00:08:45.956  Processing file lib/log/log_flags.c
00:08:45.956  Processing file lib/log/log.c
00:08:45.956  Processing file lib/lvol/lvol.c
00:08:45.956  Processing file lib/nbd/nbd.c
00:08:45.956  Processing file lib/nbd/nbd_rpc.c
00:08:46.214  Processing file lib/notify/notify_rpc.c
00:08:46.214  Processing file lib/notify/notify.c
00:08:46.781  Processing file lib/nvme/nvme_pcie_internal.h
00:08:46.781  Processing file lib/nvme/nvme_poll_group.c
00:08:46.781  Processing file lib/nvme/nvme_rdma.c
00:08:46.781  Processing file lib/nvme/nvme_ctrlr.c
00:08:46.781  Processing file lib/nvme/nvme_tcp.c
00:08:46.781  Processing file lib/nvme/nvme_auth.c
00:08:46.781  Processing file lib/nvme/nvme_internal.h
00:08:46.781  Processing file lib/nvme/nvme_ns_cmd.c
00:08:46.781  Processing file lib/nvme/nvme_opal.c
00:08:46.782  Processing file lib/nvme/nvme_ctrlr_cmd.c
00:08:46.782  Processing file lib/nvme/nvme_io_msg.c
00:08:46.782  Processing file lib/nvme/nvme_qpair.c
00:08:46.782  Processing file lib/nvme/nvme_cuse.c
00:08:46.782  Processing file lib/nvme/nvme.c
00:08:46.782  Processing file lib/nvme/nvme_pcie_common.c
00:08:46.782  Processing file lib/nvme/nvme_transport.c
00:08:46.782  Processing file lib/nvme/nvme_pcie.c
00:08:46.782  Processing file lib/nvme/nvme_fabric.c
00:08:46.782  Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c
00:08:46.782  Processing file lib/nvme/nvme_ns.c
00:08:46.782  Processing file lib/nvme/nvme_discovery.c
00:08:46.782  Processing file lib/nvme/nvme_quirks.c
00:08:46.782  Processing file lib/nvme/nvme_ns_ocssd_cmd.c
00:08:46.782  Processing file lib/nvme/nvme_zns.c
00:08:47.379  Processing file lib/nvmf/nvmf_rpc.c
00:08:47.379  Processing file lib/nvmf/nvmf_internal.h
00:08:47.379  Processing file lib/nvmf/nvmf.c
00:08:47.379  Processing file lib/nvmf/ctrlr.c
00:08:47.379  Processing file lib/nvmf/auth.c
00:08:47.379  Processing file lib/nvmf/tcp.c
00:08:47.379  Processing file lib/nvmf/subsystem.c
00:08:47.379  Processing file lib/nvmf/ctrlr_discovery.c
00:08:47.379  Processing file lib/nvmf/stubs.c
00:08:47.379  Processing file lib/nvmf/rdma.c
00:08:47.379  Processing file lib/nvmf/transport.c
00:08:47.379  Processing file lib/nvmf/ctrlr_bdev.c
00:08:47.379  Processing file lib/rdma_provider/common.c
00:08:47.379  Processing file lib/rdma_provider/rdma_provider_verbs.c
00:08:47.651  Processing file lib/rdma_utils/rdma_utils.c
00:08:47.651  Processing file lib/rpc/rpc.c
00:08:47.909  Processing file lib/scsi/scsi_rpc.c
00:08:47.909  Processing file lib/scsi/port.c
00:08:47.909  Processing file lib/scsi/scsi_bdev.c
00:08:47.909  Processing file lib/scsi/task.c
00:08:47.909  Processing file lib/scsi/dev.c
00:08:47.909  Processing file lib/scsi/lun.c
00:08:47.909  Processing file lib/scsi/scsi_pr.c
00:08:47.909  Processing file lib/scsi/scsi.c
00:08:47.909  Processing file lib/sock/sock_rpc.c
00:08:47.909  Processing file lib/sock/sock.c
00:08:47.909  Processing file lib/thread/iobuf.c
00:08:47.909  Processing file lib/thread/thread.c
00:08:48.168  Processing file lib/trace/trace_rpc.c
00:08:48.168  Processing file lib/trace/trace.c
00:08:48.168  Processing file lib/trace/trace_flags.c
00:08:48.168  Processing file lib/trace_parser/trace.cpp
00:08:48.168  Processing file lib/ut/ut.c
00:08:48.426  Processing file lib/ut_mock/mock.c
00:08:48.685  Processing file lib/util/fd.c
00:08:48.685  Processing file lib/util/string.c
00:08:48.685  Processing file lib/util/crc16.c
00:08:48.685  Processing file lib/util/crc32c.c
00:08:48.685  Processing file lib/util/crc32.c
00:08:48.685  Processing file lib/util/uuid.c
00:08:48.685  Processing file lib/util/file.c
00:08:48.685  Processing file lib/util/crc64.c
00:08:48.685  Processing file lib/util/xor.c
00:08:48.685  Processing file lib/util/base64.c
00:08:48.685  Processing file lib/util/hexlify.c
00:08:48.685  Processing file lib/util/math.c
00:08:48.685  Processing file lib/util/net.c
00:08:48.685  Processing file lib/util/fd_group.c
00:08:48.685  Processing file lib/util/bit_array.c
00:08:48.685  Processing file lib/util/crc32_ieee.c
00:08:48.685  Processing file lib/util/zipf.c
00:08:48.685  Processing file lib/util/strerror_tls.c
00:08:48.685  Processing file lib/util/cpuset.c
00:08:48.685  Processing file lib/util/iov.c
00:08:48.685  Processing file lib/util/md5.c
00:08:48.685  Processing file lib/util/pipe.c
00:08:48.685  Processing file lib/util/dif.c
00:08:48.685  Processing file lib/vfio_user/host/vfio_user.c
00:08:48.685  Processing file lib/vfio_user/host/vfio_user_pci.c
00:08:48.943  Processing file lib/vhost/rte_vhost_user.c
00:08:48.943  Processing file lib/vhost/vhost_internal.h
00:08:48.943  Processing file lib/vhost/vhost_blk.c
00:08:48.943  Processing file lib/vhost/vhost.c
00:08:48.943  Processing file lib/vhost/vhost_rpc.c
00:08:48.943  Processing file lib/vhost/vhost_scsi.c
00:08:49.200  Processing file lib/virtio/virtio_vfio_user.c
00:08:49.200  Processing file lib/virtio/virtio.c
00:08:49.200  Processing file lib/virtio/virtio_pci.c
00:08:49.200  Processing file lib/virtio/virtio_vhost_user.c
00:08:49.200  Processing file lib/vmd/led.c
00:08:49.200  Processing file lib/vmd/vmd.c
00:08:49.200  Processing file module/accel/dsa/accel_dsa.c
00:08:49.200  Processing file module/accel/dsa/accel_dsa_rpc.c
00:08:49.458  Processing file module/accel/error/accel_error_rpc.c
00:08:49.458  Processing file module/accel/error/accel_error.c
00:08:49.458  Processing file module/accel/iaa/accel_iaa_rpc.c
00:08:49.458  Processing file module/accel/iaa/accel_iaa.c
00:08:49.458  Processing file module/accel/ioat/accel_ioat_rpc.c
00:08:49.458  Processing file module/accel/ioat/accel_ioat.c
00:08:49.715  Processing file module/bdev/aio/bdev_aio_rpc.c
00:08:49.715  Processing file module/bdev/aio/bdev_aio.c
00:08:49.715  Processing file module/bdev/delay/vbdev_delay.c
00:08:49.715  Processing file module/bdev/delay/vbdev_delay_rpc.c
00:08:49.715  Processing file module/bdev/error/vbdev_error_rpc.c
00:08:49.715  Processing file module/bdev/error/vbdev_error.c
00:08:49.974  Processing file module/bdev/ftl/bdev_ftl.c
00:08:49.974  Processing file module/bdev/ftl/bdev_ftl_rpc.c
00:08:49.974  Processing file module/bdev/gpt/vbdev_gpt.c
00:08:49.974  Processing file module/bdev/gpt/gpt.c
00:08:49.974  Processing file module/bdev/gpt/gpt.h
00:08:49.974  Processing file module/bdev/iscsi/bdev_iscsi_rpc.c
00:08:49.974  Processing file module/bdev/iscsi/bdev_iscsi.c
00:08:50.232  Processing file module/bdev/lvol/vbdev_lvol.c
00:08:50.232  Processing file module/bdev/lvol/vbdev_lvol_rpc.c
00:08:50.232  Processing file module/bdev/malloc/bdev_malloc_rpc.c
00:08:50.232  Processing file module/bdev/malloc/bdev_malloc.c
00:08:50.232  Processing file module/bdev/null/bdev_null.c
00:08:50.232  Processing file module/bdev/null/bdev_null_rpc.c
00:08:50.798  Processing file module/bdev/nvme/bdev_mdns_client.c
00:08:50.798  Processing file module/bdev/nvme/nvme_rpc.c
00:08:50.798  Processing file module/bdev/nvme/bdev_nvme_cuse_rpc.c
00:08:50.798  Processing file module/bdev/nvme/bdev_nvme_rpc.c
00:08:50.798  Processing file module/bdev/nvme/vbdev_opal.c
00:08:50.798  Processing file module/bdev/nvme/vbdev_opal_rpc.c
00:08:50.798  Processing file module/bdev/nvme/bdev_nvme.c
00:08:50.798  Processing file module/bdev/passthru/vbdev_passthru_rpc.c
00:08:50.798  Processing file module/bdev/passthru/vbdev_passthru.c
00:08:50.798  Processing file module/bdev/raid/raid0.c
00:08:50.798  Processing file module/bdev/raid/bdev_raid.c
00:08:50.798  Processing file module/bdev/raid/bdev_raid_rpc.c
00:08:50.798  Processing file module/bdev/raid/bdev_raid.h
00:08:50.798  Processing file module/bdev/raid/bdev_raid_sb.c
00:08:50.798  Processing file module/bdev/raid/raid1.c
00:08:50.798  Processing file module/bdev/raid/concat.c
00:08:51.057  Processing file module/bdev/split/vbdev_split_rpc.c
00:08:51.057  Processing file module/bdev/split/vbdev_split.c
00:08:51.057  Processing file module/bdev/virtio/bdev_virtio_rpc.c
00:08:51.057  Processing file module/bdev/virtio/bdev_virtio_scsi.c
00:08:51.057  Processing file module/bdev/virtio/bdev_virtio_blk.c
00:08:51.315  Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c
00:08:51.315  Processing file module/bdev/zone_block/vbdev_zone_block.c
00:08:51.315  Processing file module/blob/bdev/blob_bdev.c
00:08:51.315  Processing file module/blobfs/bdev/blobfs_bdev_rpc.c
00:08:51.315  Processing file module/blobfs/bdev/blobfs_bdev.c
00:08:51.315  Processing file module/env_dpdk/env_dpdk_rpc.c
00:08:51.573  Processing file module/event/subsystems/accel/accel.c
00:08:51.573  Processing file module/event/subsystems/bdev/bdev.c
00:08:51.573  Processing file module/event/subsystems/fsdev/fsdev.c
00:08:51.573  Processing file module/event/subsystems/iobuf/iobuf.c
00:08:51.573  Processing file module/event/subsystems/iobuf/iobuf_rpc.c
00:08:51.832  Processing file module/event/subsystems/iscsi/iscsi.c
00:08:51.832  Processing file module/event/subsystems/keyring/keyring.c
00:08:51.832  Processing file module/event/subsystems/nbd/nbd.c
00:08:52.090  Processing file module/event/subsystems/nvmf/nvmf_tgt.c
00:08:52.090  Processing file module/event/subsystems/nvmf/nvmf_rpc.c
00:08:52.090  Processing file module/event/subsystems/scheduler/scheduler.c
00:08:52.090  Processing file module/event/subsystems/scsi/scsi.c
00:08:52.090  Processing file module/event/subsystems/sock/sock.c
00:08:52.090  Processing file module/event/subsystems/vhost_blk/vhost_blk.c
00:08:52.349  Processing file module/event/subsystems/vhost_scsi/vhost_scsi.c
00:08:52.349  Processing file module/event/subsystems/vmd/vmd.c
00:08:52.349  Processing file module/event/subsystems/vmd/vmd_rpc.c
00:08:52.349  Processing file module/fsdev/aio/fsdev_aio_rpc.c
00:08:52.349  Processing file module/fsdev/aio/linux_aio_mgr.c
00:08:52.349  Processing file module/fsdev/aio/fsdev_aio.c
00:08:52.608  Processing file module/keyring/file/keyring.c
00:08:52.608  Processing file module/keyring/file/keyring_rpc.c
00:08:52.608  Processing file module/keyring/linux/keyring.c
00:08:52.608  Processing file module/keyring/linux/keyring_rpc.c
00:08:52.608  Processing file module/scheduler/dpdk_governor/dpdk_governor.c
00:08:52.867  Processing file module/scheduler/dynamic/scheduler_dynamic.c
00:08:52.867  Processing file module/scheduler/gscheduler/gscheduler.c
00:08:52.867  Processing file module/sock/posix/posix.c
00:08:52.867  Writing directory view page.
00:08:52.867  Overall coverage rate:
00:08:52.867    lines......: 37.7% (42190 of 112039 lines)
00:08:52.867    functions..: 41.4% (3906 of 9440 functions)
00:08:52.867  Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:08:52.867  
00:08:52.867  
00:08:52.867  =====================
00:08:52.867  All unit tests passed
00:08:52.867  =====================
00:08:52.867  
00:08:52.867  
00:08:52.867   04:59:06 unittest -- unit/unittest.sh@284 -- # echo 'Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage'
00:08:52.867   04:59:06 unittest -- unit/unittest.sh@287 -- # set +x
00:08:52.867  
00:08:52.867  real	2m12.033s
00:08:52.867  user	1m48.143s
00:08:52.867  sys	0m13.425s
00:08:52.867   04:59:06 unittest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:52.867   04:59:06 unittest -- common/autotest_common.sh@10 -- # set +x
00:08:52.867  ************************************
00:08:52.867  END TEST unittest
00:08:52.867  ************************************
00:08:52.867   04:59:06  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:08:52.867   04:59:06  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:52.867   04:59:06  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:08:52.867   04:59:06  -- spdk/autotest.sh@149 -- # timing_enter lib
00:08:52.867   04:59:06  -- common/autotest_common.sh@726 -- # xtrace_disable
00:08:52.867   04:59:06  -- common/autotest_common.sh@10 -- # set +x
00:08:53.126   04:59:06  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:08:53.126   04:59:06  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:53.126   04:59:06  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.126   04:59:06  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.126   04:59:06  -- common/autotest_common.sh@10 -- # set +x
00:08:53.126  ************************************
00:08:53.126  START TEST env
00:08:53.126  ************************************
00:08:53.126   04:59:06 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:08:53.126  * Looking for test storage...
00:08:53.126  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:08:53.126    04:59:06 env -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:53.126     04:59:06 env -- common/autotest_common.sh@1693 -- # lcov --version
00:08:53.126     04:59:06 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:53.126    04:59:06 env -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:53.126    04:59:06 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:53.126    04:59:06 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:53.126    04:59:06 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:53.126    04:59:06 env -- scripts/common.sh@336 -- # IFS=.-:
00:08:53.126    04:59:06 env -- scripts/common.sh@336 -- # read -ra ver1
00:08:53.126    04:59:06 env -- scripts/common.sh@337 -- # IFS=.-:
00:08:53.126    04:59:06 env -- scripts/common.sh@337 -- # read -ra ver2
00:08:53.126    04:59:06 env -- scripts/common.sh@338 -- # local 'op=<'
00:08:53.126    04:59:06 env -- scripts/common.sh@340 -- # ver1_l=2
00:08:53.126    04:59:06 env -- scripts/common.sh@341 -- # ver2_l=1
00:08:53.126    04:59:06 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:53.126    04:59:06 env -- scripts/common.sh@344 -- # case "$op" in
00:08:53.126    04:59:06 env -- scripts/common.sh@345 -- # : 1
00:08:53.126    04:59:06 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:53.126    04:59:06 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:53.126     04:59:07 env -- scripts/common.sh@365 -- # decimal 1
00:08:53.126     04:59:07 env -- scripts/common.sh@353 -- # local d=1
00:08:53.126     04:59:07 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:53.126     04:59:07 env -- scripts/common.sh@355 -- # echo 1
00:08:53.126    04:59:07 env -- scripts/common.sh@365 -- # ver1[v]=1
00:08:53.126     04:59:07 env -- scripts/common.sh@366 -- # decimal 2
00:08:53.126     04:59:07 env -- scripts/common.sh@353 -- # local d=2
00:08:53.126     04:59:07 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:53.126     04:59:07 env -- scripts/common.sh@355 -- # echo 2
00:08:53.126    04:59:07 env -- scripts/common.sh@366 -- # ver2[v]=2
00:08:53.126    04:59:07 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:53.126    04:59:07 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:53.126    04:59:07 env -- scripts/common.sh@368 -- # return 0
00:08:53.126    04:59:07 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:53.126    04:59:07 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:53.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.126  		--rc genhtml_branch_coverage=1
00:08:53.126  		--rc genhtml_function_coverage=1
00:08:53.126  		--rc genhtml_legend=1
00:08:53.126  		--rc geninfo_all_blocks=1
00:08:53.126  		--rc geninfo_unexecuted_blocks=1
00:08:53.126  		
00:08:53.126  		'
00:08:53.126    04:59:07 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:53.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.126  		--rc genhtml_branch_coverage=1
00:08:53.126  		--rc genhtml_function_coverage=1
00:08:53.126  		--rc genhtml_legend=1
00:08:53.126  		--rc geninfo_all_blocks=1
00:08:53.126  		--rc geninfo_unexecuted_blocks=1
00:08:53.126  		
00:08:53.126  		'
00:08:53.126    04:59:07 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:53.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.126  		--rc genhtml_branch_coverage=1
00:08:53.126  		--rc genhtml_function_coverage=1
00:08:53.126  		--rc genhtml_legend=1
00:08:53.126  		--rc geninfo_all_blocks=1
00:08:53.126  		--rc geninfo_unexecuted_blocks=1
00:08:53.126  		
00:08:53.126  		'
00:08:53.126    04:59:07 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:53.126  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:53.126  		--rc genhtml_branch_coverage=1
00:08:53.126  		--rc genhtml_function_coverage=1
00:08:53.126  		--rc genhtml_legend=1
00:08:53.126  		--rc geninfo_all_blocks=1
00:08:53.126  		--rc geninfo_unexecuted_blocks=1
00:08:53.126  		
00:08:53.126  		'
00:08:53.126   04:59:07 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:53.126   04:59:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.126   04:59:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.126   04:59:07 env -- common/autotest_common.sh@10 -- # set +x
00:08:53.126  ************************************
00:08:53.126  START TEST env_memory
00:08:53.126  ************************************
00:08:53.126   04:59:07 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:08:53.126  
00:08:53.126  
00:08:53.126       CUnit - A unit testing framework for C - Version 2.1-3
00:08:53.126       http://cunit.sourceforge.net/
00:08:53.126  
00:08:53.126  
00:08:53.126  Suite: memory
00:08:53.385    Test: alloc and free memory map ...[2024-11-20 04:59:07.084781] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:08:53.385  passed
00:08:53.385    Test: mem map translation ...[2024-11-20 04:59:07.133540] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:08:53.385  [2024-11-20 04:59:07.133656] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:08:53.385  [2024-11-20 04:59:07.133768] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:08:53.385  [2024-11-20 04:59:07.133846] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:08:53.385  passed
00:08:53.385    Test: mem map registration ...[2024-11-20 04:59:07.219766] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:08:53.385  [2024-11-20 04:59:07.219836] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:08:53.385  passed
00:08:53.385    Test: mem map adjacent registrations ...passed
00:08:53.385  
00:08:53.385  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:53.385                suites      1      1    n/a      0        0
00:08:53.385                 tests      4      4      4      0        0
00:08:53.385               asserts    152    152    152      0      n/a
00:08:53.385  
00:08:53.386  Elapsed time =    0.296 seconds
00:08:53.644  
00:08:53.644  real	0m0.327s
00:08:53.644  user	0m0.312s
00:08:53.644  sys	0m0.016s
00:08:53.644  ************************************
00:08:53.644  END TEST env_memory
00:08:53.644  ************************************
00:08:53.644   04:59:07 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:53.644   04:59:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:08:53.644   04:59:07 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:53.644   04:59:07 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:53.644   04:59:07 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:53.644   04:59:07 env -- common/autotest_common.sh@10 -- # set +x
00:08:53.644  ************************************
00:08:53.644  START TEST env_vtophys
00:08:53.644  ************************************
00:08:53.644   04:59:07 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:08:53.644  EAL: lib.eal log level changed from notice to debug
00:08:53.644  EAL: Detected lcore 0 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 1 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 2 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 3 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 4 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 5 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 6 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 7 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 8 as core 0 on socket 0
00:08:53.644  EAL: Detected lcore 9 as core 0 on socket 0
00:08:53.644  EAL: Maximum logical cores by configuration: 128
00:08:53.644  EAL: Detected CPU lcores: 10
00:08:53.644  EAL: Detected NUMA nodes: 1
00:08:53.644  EAL: Checking presence of .so 'librte_eal.so.25.0'
00:08:53.644  EAL: Checking presence of .so 'librte_eal.so.25'
00:08:53.644  EAL: Checking presence of .so 'librte_eal.so'
00:08:53.644  EAL: Detected static linkage of DPDK
00:08:53.644  EAL: No shared files mode enabled, IPC will be disabled
00:08:53.644  EAL: Selected IOVA mode 'PA'
00:08:53.644  EAL: Probing VFIO support...
00:08:53.644  EAL: No shared files mode enabled, IPC is disabled
00:08:53.644  EAL: IOMMU type 1 (Type 1) is supported
00:08:53.644  EAL: IOMMU type 7 (sPAPR) is not supported
00:08:53.644  EAL: IOMMU type 8 (No-IOMMU) is not supported
00:08:53.644  EAL: VFIO support initialized
00:08:53.644  EAL: Ask a virtual area of 0x2e000 bytes
00:08:53.644  EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:08:53.644  EAL: Setting up physically contiguous memory...
00:08:53.644  EAL: Setting maximum number of open files to 1048576
00:08:53.644  EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:08:53.644  EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:08:53.644  EAL: Ask a virtual area of 0x61000 bytes
00:08:53.644  EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:08:53.644  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:53.644  EAL: Ask a virtual area of 0x400000000 bytes
00:08:53.644  EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:08:53.644  EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:08:53.644  EAL: Ask a virtual area of 0x61000 bytes
00:08:53.644  EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:08:53.644  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:53.644  EAL: Ask a virtual area of 0x400000000 bytes
00:08:53.644  EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:08:53.645  EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:08:53.645  EAL: Ask a virtual area of 0x61000 bytes
00:08:53.645  EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:08:53.645  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:53.645  EAL: Ask a virtual area of 0x400000000 bytes
00:08:53.645  EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:08:53.645  EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:08:53.645  EAL: Ask a virtual area of 0x61000 bytes
00:08:53.645  EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:08:53.645  EAL: Memseg list allocated at socket 0, page size 0x800kB
00:08:53.645  EAL: Ask a virtual area of 0x400000000 bytes
00:08:53.645  EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:08:53.645  EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:08:53.645  EAL: Hugepages will be freed exactly as allocated.
00:08:53.645  EAL: No shared files mode enabled, IPC is disabled
00:08:53.645  EAL: No shared files mode enabled, IPC is disabled
00:08:53.645  EAL: TSC frequency is ~2200000 KHz
00:08:53.645  EAL: Main lcore 0 is ready (tid=7fc7ba0eea80;cpuset=[0])
00:08:53.645  EAL: Trying to obtain current memory policy.
00:08:53.645  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:53.645  EAL: Restoring previous memory policy: 0
00:08:53.645  EAL: request: mp_malloc_sync
00:08:53.645  EAL: No shared files mode enabled, IPC is disabled
00:08:53.645  EAL: Heap on socket 0 was expanded by 2MB
00:08:53.645  EAL: Allocated 2112 bytes of per-lcore data with a 64-byte alignment
00:08:53.645  EAL: Mem event callback 'spdk:(nil)' registered
00:08:53.903  
00:08:53.903  
00:08:53.903       CUnit - A unit testing framework for C - Version 2.1-3
00:08:53.903       http://cunit.sourceforge.net/
00:08:53.903  
00:08:53.903  
00:08:53.903  Suite: components_suite
00:08:54.162    Test: vtophys_malloc_test ...passed
00:08:54.162    Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:08:54.162  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.162  EAL: Restoring previous memory policy: 0
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was expanded by 4MB
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was shrunk by 4MB
00:08:54.162  EAL: Trying to obtain current memory policy.
00:08:54.162  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.162  EAL: Restoring previous memory policy: 0
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was expanded by 6MB
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was shrunk by 6MB
00:08:54.162  EAL: Trying to obtain current memory policy.
00:08:54.162  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.162  EAL: Restoring previous memory policy: 0
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was expanded by 10MB
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was shrunk by 10MB
00:08:54.162  EAL: Trying to obtain current memory policy.
00:08:54.162  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.162  EAL: Restoring previous memory policy: 0
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was expanded by 18MB
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was shrunk by 18MB
00:08:54.162  EAL: Trying to obtain current memory policy.
00:08:54.162  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.162  EAL: Restoring previous memory policy: 0
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was expanded by 34MB
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was shrunk by 34MB
00:08:54.162  EAL: Trying to obtain current memory policy.
00:08:54.162  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.162  EAL: Restoring previous memory policy: 0
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was expanded by 66MB
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was shrunk by 66MB
00:08:54.162  EAL: Trying to obtain current memory policy.
00:08:54.162  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.162  EAL: Restoring previous memory policy: 0
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.162  EAL: request: mp_malloc_sync
00:08:54.162  EAL: No shared files mode enabled, IPC is disabled
00:08:54.162  EAL: Heap on socket 0 was expanded by 130MB
00:08:54.162  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.421  EAL: request: mp_malloc_sync
00:08:54.421  EAL: No shared files mode enabled, IPC is disabled
00:08:54.421  EAL: Heap on socket 0 was shrunk by 130MB
00:08:54.421  EAL: Trying to obtain current memory policy.
00:08:54.421  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.421  EAL: Restoring previous memory policy: 0
00:08:54.421  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.421  EAL: request: mp_malloc_sync
00:08:54.421  EAL: No shared files mode enabled, IPC is disabled
00:08:54.421  EAL: Heap on socket 0 was expanded by 258MB
00:08:54.421  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.421  EAL: request: mp_malloc_sync
00:08:54.421  EAL: No shared files mode enabled, IPC is disabled
00:08:54.421  EAL: Heap on socket 0 was shrunk by 258MB
00:08:54.421  EAL: Trying to obtain current memory policy.
00:08:54.421  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.680  EAL: Restoring previous memory policy: 0
00:08:54.680  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.680  EAL: request: mp_malloc_sync
00:08:54.680  EAL: No shared files mode enabled, IPC is disabled
00:08:54.680  EAL: Heap on socket 0 was expanded by 514MB
00:08:54.680  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.680  EAL: request: mp_malloc_sync
00:08:54.680  EAL: No shared files mode enabled, IPC is disabled
00:08:54.680  EAL: Heap on socket 0 was shrunk by 514MB
00:08:54.680  EAL: Trying to obtain current memory policy.
00:08:54.680  EAL: Setting policy MPOL_PREFERRED for socket 0
00:08:54.939  EAL: Restoring previous memory policy: 0
00:08:54.939  EAL: Calling mem event callback 'spdk:(nil)'
00:08:54.939  EAL: request: mp_malloc_sync
00:08:54.939  EAL: No shared files mode enabled, IPC is disabled
00:08:54.939  EAL: Heap on socket 0 was expanded by 1026MB
00:08:55.197  EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.454  EAL: request: mp_malloc_sync
00:08:55.454  EAL: No shared files mode enabled, IPC is disabled
00:08:55.454  EAL: Heap on socket 0 was shrunk by 1026MB
00:08:55.454  passed
00:08:55.454  
00:08:55.454  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:55.454                suites      1      1    n/a      0        0
00:08:55.454                 tests      2      2      2      0        0
00:08:55.454               asserts   6289   6289   6289      0      n/a
00:08:55.454  
00:08:55.454  Elapsed time =    1.599 seconds
00:08:55.454  EAL: Calling mem event callback 'spdk:(nil)'
00:08:55.454  EAL: request: mp_malloc_sync
00:08:55.454  EAL: No shared files mode enabled, IPC is disabled
00:08:55.454  EAL: Heap on socket 0 was shrunk by 2MB
00:08:55.454  EAL: No shared files mode enabled, IPC is disabled
00:08:55.454  EAL: No shared files mode enabled, IPC is disabled
00:08:55.454  EAL: No shared files mode enabled, IPC is disabled
00:08:55.454  
00:08:55.454  real	0m1.891s
00:08:55.454  user	0m0.941s
00:08:55.454  sys	0m0.814s
00:08:55.454   04:59:09 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.454   04:59:09 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:08:55.454  ************************************
00:08:55.454  END TEST env_vtophys
00:08:55.454  ************************************
00:08:55.454   04:59:09 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:55.454   04:59:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.454   04:59:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.454   04:59:09 env -- common/autotest_common.sh@10 -- # set +x
00:08:55.454  ************************************
00:08:55.454  START TEST env_pci
00:08:55.454  ************************************
00:08:55.454   04:59:09 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:08:55.454  
00:08:55.454  
00:08:55.454       CUnit - A unit testing framework for C - Version 2.1-3
00:08:55.454       http://cunit.sourceforge.net/
00:08:55.454  
00:08:55.454  
00:08:55.454  Suite: pci
00:08:55.454    Test: pci_hook ...[2024-11-20 04:59:09.378113] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 123358 has claimed it
00:08:55.713  EAL: Cannot find device (10000:00:01.0)
00:08:55.713  EAL: Failed to attach device on primary process
00:08:55.713  passed
00:08:55.713  
00:08:55.713  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:55.713                suites      1      1    n/a      0        0
00:08:55.713                 tests      1      1      1      0        0
00:08:55.713               asserts     25     25     25      0      n/a
00:08:55.713  
00:08:55.713  Elapsed time =    0.006 seconds
00:08:55.713  
00:08:55.713  real	0m0.092s
00:08:55.713  user	0m0.044s
00:08:55.713  sys	0m0.047s
00:08:55.713   04:59:09 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.713   04:59:09 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:08:55.713  ************************************
00:08:55.713  END TEST env_pci
00:08:55.713  ************************************
00:08:55.713   04:59:09 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:08:55.713    04:59:09 env -- env/env.sh@15 -- # uname
00:08:55.713   04:59:09 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:08:55.713   04:59:09 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:08:55.713   04:59:09 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:55.713   04:59:09 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:08:55.713   04:59:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.713   04:59:09 env -- common/autotest_common.sh@10 -- # set +x
00:08:55.713  ************************************
00:08:55.713  START TEST env_dpdk_post_init
00:08:55.713  ************************************
00:08:55.713   04:59:09 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:08:55.713  EAL: Detected CPU lcores: 10
00:08:55.713  EAL: Detected NUMA nodes: 1
00:08:55.713  EAL: Detected static linkage of DPDK
00:08:55.713  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:55.713  EAL: Selected IOVA mode 'PA'
00:08:55.713  EAL: VFIO support initialized
00:08:55.972  Starting DPDK initialization...
00:08:55.972  Starting SPDK post initialization...
00:08:55.972  SPDK NVMe probe
00:08:55.972  Attaching to 0000:00:10.0
00:08:55.972  Attached to 0000:00:10.0
00:08:55.972  Cleaning up...
00:08:55.972  
00:08:55.972  real	0m0.273s
00:08:55.972  user	0m0.088s
00:08:55.972  sys	0m0.087s
00:08:55.972   04:59:09 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:55.972   04:59:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:08:55.972  ************************************
00:08:55.972  END TEST env_dpdk_post_init
00:08:55.972  ************************************
00:08:55.972    04:59:09 env -- env/env.sh@26 -- # uname
00:08:55.972   04:59:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:08:55.972   04:59:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:55.972   04:59:09 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:55.972   04:59:09 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:55.972   04:59:09 env -- common/autotest_common.sh@10 -- # set +x
00:08:55.972  ************************************
00:08:55.972  START TEST env_mem_callbacks
00:08:55.972  ************************************
00:08:55.972   04:59:09 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:08:55.972  EAL: Detected CPU lcores: 10
00:08:55.972  EAL: Detected NUMA nodes: 1
00:08:55.972  EAL: Detected static linkage of DPDK
00:08:55.972  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:08:55.973  EAL: Selected IOVA mode 'PA'
00:08:55.973  EAL: VFIO support initialized
00:08:56.231  
00:08:56.231  
00:08:56.231       CUnit - A unit testing framework for C - Version 2.1-3
00:08:56.231       http://cunit.sourceforge.net/
00:08:56.231  
00:08:56.231  
00:08:56.231  Suite: memory
00:08:56.231    Test: test ...
00:08:56.231  register 0x200000200000 2097152
00:08:56.231  malloc 3145728
00:08:56.231  register 0x200000400000 4194304
00:08:56.231  buf 0x200000500000 len 3145728 PASSED
00:08:56.231  malloc 64
00:08:56.231  buf 0x2000004fff40 len 64 PASSED
00:08:56.231  malloc 4194304
00:08:56.231  register 0x200000800000 6291456
00:08:56.231  buf 0x200000a00000 len 4194304 PASSED
00:08:56.231  free 0x200000500000 3145728
00:08:56.231  free 0x2000004fff40 64
00:08:56.231  unregister 0x200000400000 4194304 PASSED
00:08:56.231  free 0x200000a00000 4194304
00:08:56.231  unregister 0x200000800000 6291456 PASSED
00:08:56.231  malloc 8388608
00:08:56.231  register 0x200000400000 10485760
00:08:56.231  buf 0x200000600000 len 8388608 PASSED
00:08:56.231  free 0x200000600000 8388608
00:08:56.231  unregister 0x200000400000 10485760 PASSED
00:08:56.231  passed
00:08:56.231  
00:08:56.231  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:08:56.231                suites      1      1    n/a      0        0
00:08:56.231                 tests      1      1      1      0        0
00:08:56.231               asserts     15     15     15      0      n/a
00:08:56.231  
00:08:56.231  Elapsed time =    0.007 seconds
00:08:56.231  
00:08:56.231  real	0m0.223s
00:08:56.231  user	0m0.071s
00:08:56.231  sys	0m0.051s
00:08:56.231   04:59:10 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.231   04:59:10 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:08:56.231  ************************************
00:08:56.231  END TEST env_mem_callbacks
00:08:56.231  ************************************
00:08:56.231  ************************************
00:08:56.231  END TEST env
00:08:56.231  ************************************
00:08:56.231  
00:08:56.231  real	0m3.252s
00:08:56.231  user	0m1.723s
00:08:56.231  sys	0m1.195s
00:08:56.231   04:59:10 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.231   04:59:10 env -- common/autotest_common.sh@10 -- # set +x
00:08:56.231   04:59:10  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:56.231   04:59:10  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:56.231   04:59:10  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.231   04:59:10  -- common/autotest_common.sh@10 -- # set +x
00:08:56.231  ************************************
00:08:56.231  START TEST rpc
00:08:56.231  ************************************
00:08:56.231   04:59:10 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:08:56.490  * Looking for test storage...
00:08:56.490  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:56.490    04:59:10 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:56.490     04:59:10 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:56.490     04:59:10 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:56.490    04:59:10 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:56.490    04:59:10 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:56.490    04:59:10 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:56.490    04:59:10 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:56.490    04:59:10 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:56.490    04:59:10 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:56.490    04:59:10 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:56.490    04:59:10 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:56.490    04:59:10 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:56.490    04:59:10 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:56.490    04:59:10 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:56.490    04:59:10 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:56.490    04:59:10 rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:56.490    04:59:10 rpc -- scripts/common.sh@345 -- # : 1
00:08:56.490    04:59:10 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:56.490    04:59:10 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:56.490     04:59:10 rpc -- scripts/common.sh@365 -- # decimal 1
00:08:56.490     04:59:10 rpc -- scripts/common.sh@353 -- # local d=1
00:08:56.490     04:59:10 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:56.490     04:59:10 rpc -- scripts/common.sh@355 -- # echo 1
00:08:56.490    04:59:10 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:56.490     04:59:10 rpc -- scripts/common.sh@366 -- # decimal 2
00:08:56.490     04:59:10 rpc -- scripts/common.sh@353 -- # local d=2
00:08:56.490     04:59:10 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:56.490     04:59:10 rpc -- scripts/common.sh@355 -- # echo 2
00:08:56.490    04:59:10 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:56.490    04:59:10 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:56.490    04:59:10 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:56.490    04:59:10 rpc -- scripts/common.sh@368 -- # return 0
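The `cmp_versions` trace above (scripts/common.sh) splits each version string on `.`, `-`, and `:` and compares the components numerically, index by index. A minimal re-implementation of that "less than" logic, for illustration only — the real helper in scripts/common.sh supports more operators than `<` and this sketch simplifies it:

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare traced above: mirrors the
# IFS=.-: / read -ra / per-index (( < )) steps, reduced to the '<' case.
lt() {
    local -a ver1 ver2
    local v max
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # versions equal -> not strictly less
}

lt 1.15 2 && echo "1.15 < 2"   # prints: 1.15 < 2
```

This is why the trace returns 0 for `lt 1.15 2`: the first components already decide it (1 < 2), so lcov 1.x is treated as older than 2.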
00:08:56.490    04:59:10 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:56.490    04:59:10 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:56.490  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.490  		--rc genhtml_branch_coverage=1
00:08:56.490  		--rc genhtml_function_coverage=1
00:08:56.490  		--rc genhtml_legend=1
00:08:56.490  		--rc geninfo_all_blocks=1
00:08:56.490  		--rc geninfo_unexecuted_blocks=1
00:08:56.490  		
00:08:56.490  		'
00:08:56.490    04:59:10 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:56.491  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.491  		--rc genhtml_branch_coverage=1
00:08:56.491  		--rc genhtml_function_coverage=1
00:08:56.491  		--rc genhtml_legend=1
00:08:56.491  		--rc geninfo_all_blocks=1
00:08:56.491  		--rc geninfo_unexecuted_blocks=1
00:08:56.491  		
00:08:56.491  		'
00:08:56.491    04:59:10 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:56.491  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.491  		--rc genhtml_branch_coverage=1
00:08:56.491  		--rc genhtml_function_coverage=1
00:08:56.491  		--rc genhtml_legend=1
00:08:56.491  		--rc geninfo_all_blocks=1
00:08:56.491  		--rc geninfo_unexecuted_blocks=1
00:08:56.491  		
00:08:56.491  		'
00:08:56.491    04:59:10 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:56.491  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.491  		--rc genhtml_branch_coverage=1
00:08:56.491  		--rc genhtml_function_coverage=1
00:08:56.491  		--rc genhtml_legend=1
00:08:56.491  		--rc geninfo_all_blocks=1
00:08:56.491  		--rc geninfo_unexecuted_blocks=1
00:08:56.491  		
00:08:56.491  		'
00:08:56.491   04:59:10 rpc -- rpc/rpc.sh@65 -- # spdk_pid=123498
00:08:56.491   04:59:10 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:56.491   04:59:10 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:08:56.491   04:59:10 rpc -- rpc/rpc.sh@67 -- # waitforlisten 123498
00:08:56.491   04:59:10 rpc -- common/autotest_common.sh@835 -- # '[' -z 123498 ']'
00:08:56.491   04:59:10 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:56.491   04:59:10 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:56.491  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:56.491   04:59:10 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:56.491   04:59:10 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:56.491   04:59:10 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:56.491  [2024-11-20 04:59:10.404648] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:08:56.491  [2024-11-20 04:59:10.405133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123498 ]
00:08:56.749  [2024-11-20 04:59:10.542909] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:08:56.749  [2024-11-20 04:59:10.569194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:56.749  [2024-11-20 04:59:10.604637] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:08:56.749  [2024-11-20 04:59:10.605033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 123498' to capture a snapshot of events at runtime.
00:08:56.750  [2024-11-20 04:59:10.605218] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:56.750  [2024-11-20 04:59:10.605424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:56.750  [2024-11-20 04:59:10.605594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid123498 for offline analysis/debug.
00:08:56.750  [2024-11-20 04:59:10.606256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.687   04:59:11 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:57.687   04:59:11 rpc -- common/autotest_common.sh@868 -- # return 0
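The launch sequence above starts `spdk_tgt`, then `waitforlisten` polls (with `max_retries=100`) until the target is listening on `/var/tmp/spdk.sock`; here it succeeded on the first attempt (`i == 0`, return 0). A hedged sketch of that polling pattern — the real `waitforlisten` in autotest_common.sh does more (e.g. verifying the RPC server actually responds); the names follow the trace, the body is illustrative:

```shell
#!/usr/bin/env bash
# Poll until a UNIX-domain socket appears, bailing out early if the
# process that should create it has died. Illustrative only.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    for (( i = 0; i < max_retries; i++ )); do
        [[ -S $rpc_addr ]] && return 0          # socket exists: target is up
        kill -0 "$pid" 2>/dev/null || return 1  # process is gone: give up
        sleep 0.1
    done
    return 1                                    # timed out
}
```

The `trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT` set just before this ensures the target is torn down even if the wait fails.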
00:08:57.687   04:59:11 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:57.687   04:59:11 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:08:57.687   04:59:11 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:08:57.687   04:59:11 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:08:57.687   04:59:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.687   04:59:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.687   04:59:11 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:57.687  ************************************
00:08:57.687  START TEST rpc_integrity
00:08:57.687  ************************************
00:08:57.687   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:57.687    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.687   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:57.687    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:57.687   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:57.687    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.687   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:08:57.687    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.687   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:57.687  {
00:08:57.687  "name": "Malloc0",
00:08:57.687  "aliases": [
00:08:57.687  "16cbaf2c-e10b-487c-97cb-9b33ba8bc1f3"
00:08:57.687  ],
00:08:57.687  "product_name": "Malloc disk",
00:08:57.687  "block_size": 512,
00:08:57.687  "num_blocks": 16384,
00:08:57.687  "uuid": "16cbaf2c-e10b-487c-97cb-9b33ba8bc1f3",
00:08:57.687  "assigned_rate_limits": {
00:08:57.687  "rw_ios_per_sec": 0,
00:08:57.687  "rw_mbytes_per_sec": 0,
00:08:57.687  "r_mbytes_per_sec": 0,
00:08:57.687  "w_mbytes_per_sec": 0
00:08:57.687  },
00:08:57.687  "claimed": false,
00:08:57.687  "zoned": false,
00:08:57.687  "supported_io_types": {
00:08:57.687  "read": true,
00:08:57.687  "write": true,
00:08:57.687  "unmap": true,
00:08:57.687  "flush": true,
00:08:57.687  "reset": true,
00:08:57.687  "nvme_admin": false,
00:08:57.687  "nvme_io": false,
00:08:57.687  "nvme_io_md": false,
00:08:57.687  "write_zeroes": true,
00:08:57.687  "zcopy": true,
00:08:57.687  "get_zone_info": false,
00:08:57.687  "zone_management": false,
00:08:57.687  "zone_append": false,
00:08:57.687  "compare": false,
00:08:57.687  "compare_and_write": false,
00:08:57.687  "abort": true,
00:08:57.687  "seek_hole": false,
00:08:57.687  "seek_data": false,
00:08:57.687  "copy": true,
00:08:57.687  "nvme_iov_md": false
00:08:57.687  },
00:08:57.687  "memory_domains": [
00:08:57.687  {
00:08:57.687  "dma_device_id": "system",
00:08:57.687  "dma_device_type": 1
00:08:57.687  },
00:08:57.687  {
00:08:57.687  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.687  "dma_device_type": 2
00:08:57.687  }
00:08:57.687  ],
00:08:57.687  "driver_specific": {}
00:08:57.687  }
00:08:57.687  ]'
00:08:57.687    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:57.687   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:57.687   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:08:57.687   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.687   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.687  [2024-11-20 04:59:11.539899] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:08:57.687  [2024-11-20 04:59:11.540032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:57.687  [2024-11-20 04:59:11.540075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:08:57.687  [2024-11-20 04:59:11.540111] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:57.687  [2024-11-20 04:59:11.542811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:57.687  [2024-11-20 04:59:11.542902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:57.687  Passthru0
00:08:57.687   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.687    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.687    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.687   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:57.687  {
00:08:57.687  "name": "Malloc0",
00:08:57.687  "aliases": [
00:08:57.687  "16cbaf2c-e10b-487c-97cb-9b33ba8bc1f3"
00:08:57.687  ],
00:08:57.687  "product_name": "Malloc disk",
00:08:57.687  "block_size": 512,
00:08:57.687  "num_blocks": 16384,
00:08:57.687  "uuid": "16cbaf2c-e10b-487c-97cb-9b33ba8bc1f3",
00:08:57.687  "assigned_rate_limits": {
00:08:57.688  "rw_ios_per_sec": 0,
00:08:57.688  "rw_mbytes_per_sec": 0,
00:08:57.688  "r_mbytes_per_sec": 0,
00:08:57.688  "w_mbytes_per_sec": 0
00:08:57.688  },
00:08:57.688  "claimed": true,
00:08:57.688  "claim_type": "exclusive_write",
00:08:57.688  "zoned": false,
00:08:57.688  "supported_io_types": {
00:08:57.688  "read": true,
00:08:57.688  "write": true,
00:08:57.688  "unmap": true,
00:08:57.688  "flush": true,
00:08:57.688  "reset": true,
00:08:57.688  "nvme_admin": false,
00:08:57.688  "nvme_io": false,
00:08:57.688  "nvme_io_md": false,
00:08:57.688  "write_zeroes": true,
00:08:57.688  "zcopy": true,
00:08:57.688  "get_zone_info": false,
00:08:57.688  "zone_management": false,
00:08:57.688  "zone_append": false,
00:08:57.688  "compare": false,
00:08:57.688  "compare_and_write": false,
00:08:57.688  "abort": true,
00:08:57.688  "seek_hole": false,
00:08:57.688  "seek_data": false,
00:08:57.688  "copy": true,
00:08:57.688  "nvme_iov_md": false
00:08:57.688  },
00:08:57.688  "memory_domains": [
00:08:57.688  {
00:08:57.688  "dma_device_id": "system",
00:08:57.688  "dma_device_type": 1
00:08:57.688  },
00:08:57.688  {
00:08:57.688  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.688  "dma_device_type": 2
00:08:57.688  }
00:08:57.688  ],
00:08:57.688  "driver_specific": {}
00:08:57.688  },
00:08:57.688  {
00:08:57.688  "name": "Passthru0",
00:08:57.688  "aliases": [
00:08:57.688  "3a26fd70-8433-5ee3-aa28-e43ee59ac009"
00:08:57.688  ],
00:08:57.688  "product_name": "passthru",
00:08:57.688  "block_size": 512,
00:08:57.688  "num_blocks": 16384,
00:08:57.688  "uuid": "3a26fd70-8433-5ee3-aa28-e43ee59ac009",
00:08:57.688  "assigned_rate_limits": {
00:08:57.688  "rw_ios_per_sec": 0,
00:08:57.688  "rw_mbytes_per_sec": 0,
00:08:57.688  "r_mbytes_per_sec": 0,
00:08:57.688  "w_mbytes_per_sec": 0
00:08:57.688  },
00:08:57.688  "claimed": false,
00:08:57.688  "zoned": false,
00:08:57.688  "supported_io_types": {
00:08:57.688  "read": true,
00:08:57.688  "write": true,
00:08:57.688  "unmap": true,
00:08:57.688  "flush": true,
00:08:57.688  "reset": true,
00:08:57.688  "nvme_admin": false,
00:08:57.688  "nvme_io": false,
00:08:57.688  "nvme_io_md": false,
00:08:57.688  "write_zeroes": true,
00:08:57.688  "zcopy": true,
00:08:57.688  "get_zone_info": false,
00:08:57.688  "zone_management": false,
00:08:57.688  "zone_append": false,
00:08:57.688  "compare": false,
00:08:57.688  "compare_and_write": false,
00:08:57.688  "abort": true,
00:08:57.688  "seek_hole": false,
00:08:57.688  "seek_data": false,
00:08:57.688  "copy": true,
00:08:57.688  "nvme_iov_md": false
00:08:57.688  },
00:08:57.688  "memory_domains": [
00:08:57.688  {
00:08:57.688  "dma_device_id": "system",
00:08:57.688  "dma_device_type": 1
00:08:57.688  },
00:08:57.688  {
00:08:57.688  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.688  "dma_device_type": 2
00:08:57.688  }
00:08:57.688  ],
00:08:57.688  "driver_specific": {
00:08:57.688  "passthru": {
00:08:57.688  "name": "Passthru0",
00:08:57.688  "base_bdev_name": "Malloc0"
00:08:57.688  }
00:08:57.688  }
00:08:57.688  }
00:08:57.688  ]'
00:08:57.688    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:57.688   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:57.688   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:57.688   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.688   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.688   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.688   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:08:57.688   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.688   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.688   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.688    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:57.688    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.688    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.688    04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.688   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:57.688    04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:57.947   04:59:11 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:57.947  
00:08:57.947  real	0m0.299s
00:08:57.947  user	0m0.208s
00:08:57.947  sys	0m0.021s
00:08:57.947   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:57.947   04:59:11 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:57.947  ************************************
00:08:57.947  END TEST rpc_integrity
00:08:57.947  ************************************
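The rpc_integrity test above drives its assertions entirely through `jq` checks on `bdev_get_bdevs` output: list length 0, create Malloc0, length 1, wrap it in Passthru0, length 2, then delete both and check the list is empty again. Those `jq` probes can be reproduced standalone against a trimmed copy of the Malloc0 entry shown in the log (jq assumed available, as it is in this harness):

```shell
#!/usr/bin/env bash
# Re-run the kind of jq assertions rpc/rpc.sh applies to bdev_get_bdevs
# output. The JSON is a trimmed copy of the Malloc0 entry from the log.
bdevs='[{"name":"Malloc0","uuid":"16cbaf2c-e10b-487c-97cb-9b33ba8bc1f3","block_size":512,"num_blocks":16384,"claimed":false}]'

echo "$bdevs" | jq length          # 1  -> matches the '[' 1 == 1 ']' check
echo "$bdevs" | jq -r '.[0].name'  # Malloc0
```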
00:08:57.947   04:59:11 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:08:57.947   04:59:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:57.947   04:59:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:57.947   04:59:11 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:57.947  ************************************
00:08:57.947  START TEST rpc_plugins
00:08:57.947  ************************************
00:08:57.947   04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:08:57.947    04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.947   04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:08:57.947    04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.947   04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:08:57.947  {
00:08:57.947  "name": "Malloc1",
00:08:57.947  "aliases": [
00:08:57.947  "1b0400a7-e58a-4a64-9668-90bc2c18d4fb"
00:08:57.947  ],
00:08:57.947  "product_name": "Malloc disk",
00:08:57.947  "block_size": 4096,
00:08:57.947  "num_blocks": 256,
00:08:57.947  "uuid": "1b0400a7-e58a-4a64-9668-90bc2c18d4fb",
00:08:57.947  "assigned_rate_limits": {
00:08:57.947  "rw_ios_per_sec": 0,
00:08:57.947  "rw_mbytes_per_sec": 0,
00:08:57.947  "r_mbytes_per_sec": 0,
00:08:57.947  "w_mbytes_per_sec": 0
00:08:57.947  },
00:08:57.947  "claimed": false,
00:08:57.947  "zoned": false,
00:08:57.947  "supported_io_types": {
00:08:57.947  "read": true,
00:08:57.947  "write": true,
00:08:57.947  "unmap": true,
00:08:57.947  "flush": true,
00:08:57.947  "reset": true,
00:08:57.947  "nvme_admin": false,
00:08:57.947  "nvme_io": false,
00:08:57.947  "nvme_io_md": false,
00:08:57.947  "write_zeroes": true,
00:08:57.947  "zcopy": true,
00:08:57.947  "get_zone_info": false,
00:08:57.947  "zone_management": false,
00:08:57.947  "zone_append": false,
00:08:57.947  "compare": false,
00:08:57.947  "compare_and_write": false,
00:08:57.947  "abort": true,
00:08:57.947  "seek_hole": false,
00:08:57.947  "seek_data": false,
00:08:57.947  "copy": true,
00:08:57.947  "nvme_iov_md": false
00:08:57.947  },
00:08:57.947  "memory_domains": [
00:08:57.947  {
00:08:57.947  "dma_device_id": "system",
00:08:57.947  "dma_device_type": 1
00:08:57.947  },
00:08:57.947  {
00:08:57.947  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:57.947  "dma_device_type": 2
00:08:57.947  }
00:08:57.947  ],
00:08:57.947  "driver_specific": {}
00:08:57.947  }
00:08:57.947  ]'
00:08:57.947    04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:08:57.947   04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:08:57.947   04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:08:57.947   04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.947   04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.947   04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.947    04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:57.947    04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:57.947   04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:08:57.947    04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:08:58.206  ************************************
00:08:58.206  END TEST rpc_plugins
00:08:58.206  ************************************
00:08:58.206   04:59:11 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:08:58.206  
00:08:58.206  real	0m0.162s
00:08:58.206  user	0m0.107s
00:08:58.206  sys	0m0.015s
00:08:58.206   04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:58.206   04:59:11 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:08:58.206   04:59:11 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:08:58.206   04:59:11 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:58.206   04:59:11 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:58.206   04:59:11 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:58.206  ************************************
00:08:58.206  START TEST rpc_trace_cmd_test
00:08:58.206  ************************************
00:08:58.206   04:59:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:08:58.206   04:59:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:08:58.206    04:59:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:08:58.206    04:59:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.206    04:59:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.206    04:59:11 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.206   04:59:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:08:58.206  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid123498",
00:08:58.206  "tpoint_group_mask": "0x8",
00:08:58.206  "iscsi_conn": {
00:08:58.206  "mask": "0x2",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "scsi": {
00:08:58.206  "mask": "0x4",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "bdev": {
00:08:58.206  "mask": "0x8",
00:08:58.206  "tpoint_mask": "0xffffffffffffffff"
00:08:58.206  },
00:08:58.206  "nvmf_rdma": {
00:08:58.206  "mask": "0x10",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "nvmf_tcp": {
00:08:58.206  "mask": "0x20",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "ftl": {
00:08:58.206  "mask": "0x40",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "blobfs": {
00:08:58.206  "mask": "0x80",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "dsa": {
00:08:58.206  "mask": "0x200",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "thread": {
00:08:58.206  "mask": "0x400",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "nvme_pcie": {
00:08:58.206  "mask": "0x800",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "iaa": {
00:08:58.206  "mask": "0x1000",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "nvme_tcp": {
00:08:58.206  "mask": "0x2000",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "bdev_nvme": {
00:08:58.206  "mask": "0x4000",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "sock": {
00:08:58.206  "mask": "0x8000",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "blob": {
00:08:58.206  "mask": "0x10000",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "bdev_raid": {
00:08:58.206  "mask": "0x20000",
00:08:58.206  "tpoint_mask": "0x0"
00:08:58.206  },
00:08:58.206  "scheduler": {
00:08:58.206  "mask": "0x40000",
00:08:58.207  "tpoint_mask": "0x0"
00:08:58.207  }
00:08:58.207  }'
00:08:58.207    04:59:11 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:08:58.207   04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']'
00:08:58.207    04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:08:58.207   04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:08:58.207    04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:08:58.207   04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:08:58.207    04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:08:58.466   04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:08:58.466    04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:08:58.466   04:59:12 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
00:08:58.466  
00:08:58.466  real	0m0.273s
00:08:58.466  user	0m0.235s
00:08:58.466  sys	0m0.031s
00:08:58.466   04:59:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:58.466   04:59:12 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:08:58.466  ************************************
00:08:58.466  END TEST rpc_trace_cmd_test
00:08:58.466  ************************************
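rpc_trace_cmd_test above validates `trace_get_info` output: more than 2 keys present, `tpoint_group_mask` and `tpoint_shm_path` exist, and `bdev.tpoint_mask` is non-zero (0xffffffffffffffff here, since spdk_tgt was started with `-e bdev`). The same jq probes can be run against a trimmed copy of the JSON captured in the log:

```shell
#!/usr/bin/env bash
# jq probes mirroring the rpc/rpc.sh@43..47 checks above, run against a
# trimmed copy of the trace_get_info output from the log.
info='{"tpoint_shm_path":"/dev/shm/spdk_tgt_trace.pid123498","tpoint_group_mask":"0x8","bdev":{"mask":"0x8","tpoint_mask":"0xffffffffffffffff"}}'

echo "$info" | jq 'has("tpoint_group_mask")'   # true
echo "$info" | jq -r '.bdev.tpoint_mask'       # 0xffffffffffffffff
```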
00:08:58.466   04:59:12 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:08:58.466   04:59:12 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:08:58.466   04:59:12 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:08:58.466   04:59:12 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:58.466   04:59:12 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:58.466   04:59:12 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:58.466  ************************************
00:08:58.466  START TEST rpc_daemon_integrity
00:08:58.466  ************************************
00:08:58.466   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.466   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:08:58.466   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.466   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.466   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:08:58.466  {
00:08:58.466  "name": "Malloc2",
00:08:58.466  "aliases": [
00:08:58.466  "86a5dd10-79d3-4882-b2a1-d511ba4b274f"
00:08:58.466  ],
00:08:58.466  "product_name": "Malloc disk",
00:08:58.466  "block_size": 512,
00:08:58.466  "num_blocks": 16384,
00:08:58.466  "uuid": "86a5dd10-79d3-4882-b2a1-d511ba4b274f",
00:08:58.466  "assigned_rate_limits": {
00:08:58.466  "rw_ios_per_sec": 0,
00:08:58.466  "rw_mbytes_per_sec": 0,
00:08:58.466  "r_mbytes_per_sec": 0,
00:08:58.466  "w_mbytes_per_sec": 0
00:08:58.466  },
00:08:58.466  "claimed": false,
00:08:58.466  "zoned": false,
00:08:58.466  "supported_io_types": {
00:08:58.466  "read": true,
00:08:58.466  "write": true,
00:08:58.466  "unmap": true,
00:08:58.466  "flush": true,
00:08:58.466  "reset": true,
00:08:58.466  "nvme_admin": false,
00:08:58.466  "nvme_io": false,
00:08:58.466  "nvme_io_md": false,
00:08:58.466  "write_zeroes": true,
00:08:58.466  "zcopy": true,
00:08:58.466  "get_zone_info": false,
00:08:58.466  "zone_management": false,
00:08:58.466  "zone_append": false,
00:08:58.466  "compare": false,
00:08:58.466  "compare_and_write": false,
00:08:58.466  "abort": true,
00:08:58.466  "seek_hole": false,
00:08:58.466  "seek_data": false,
00:08:58.466  "copy": true,
00:08:58.466  "nvme_iov_md": false
00:08:58.466  },
00:08:58.466  "memory_domains": [
00:08:58.466  {
00:08:58.466  "dma_device_id": "system",
00:08:58.466  "dma_device_type": 1
00:08:58.466  },
00:08:58.466  {
00:08:58.466  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:58.466  "dma_device_type": 2
00:08:58.466  }
00:08:58.466  ],
00:08:58.466  "driver_specific": {}
00:08:58.466  }
00:08:58.466  ]'
00:08:58.466    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.725  [2024-11-20 04:59:12.450782] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:08:58.725  [2024-11-20 04:59:12.450857] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:58.725  [2024-11-20 04:59:12.450901] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480
00:08:58.725  [2024-11-20 04:59:12.450926] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:58.725  [2024-11-20 04:59:12.453742] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:58.725  [2024-11-20 04:59:12.453827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:08:58.725  Passthru0
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:08:58.725  {
00:08:58.725  "name": "Malloc2",
00:08:58.725  "aliases": [
00:08:58.725  "86a5dd10-79d3-4882-b2a1-d511ba4b274f"
00:08:58.725  ],
00:08:58.725  "product_name": "Malloc disk",
00:08:58.725  "block_size": 512,
00:08:58.725  "num_blocks": 16384,
00:08:58.725  "uuid": "86a5dd10-79d3-4882-b2a1-d511ba4b274f",
00:08:58.725  "assigned_rate_limits": {
00:08:58.725  "rw_ios_per_sec": 0,
00:08:58.725  "rw_mbytes_per_sec": 0,
00:08:58.725  "r_mbytes_per_sec": 0,
00:08:58.725  "w_mbytes_per_sec": 0
00:08:58.725  },
00:08:58.725  "claimed": true,
00:08:58.725  "claim_type": "exclusive_write",
00:08:58.725  "zoned": false,
00:08:58.725  "supported_io_types": {
00:08:58.725  "read": true,
00:08:58.725  "write": true,
00:08:58.725  "unmap": true,
00:08:58.725  "flush": true,
00:08:58.725  "reset": true,
00:08:58.725  "nvme_admin": false,
00:08:58.725  "nvme_io": false,
00:08:58.725  "nvme_io_md": false,
00:08:58.725  "write_zeroes": true,
00:08:58.725  "zcopy": true,
00:08:58.725  "get_zone_info": false,
00:08:58.725  "zone_management": false,
00:08:58.725  "zone_append": false,
00:08:58.725  "compare": false,
00:08:58.725  "compare_and_write": false,
00:08:58.725  "abort": true,
00:08:58.725  "seek_hole": false,
00:08:58.725  "seek_data": false,
00:08:58.725  "copy": true,
00:08:58.725  "nvme_iov_md": false
00:08:58.725  },
00:08:58.725  "memory_domains": [
00:08:58.725  {
00:08:58.725  "dma_device_id": "system",
00:08:58.725  "dma_device_type": 1
00:08:58.725  },
00:08:58.725  {
00:08:58.725  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:58.725  "dma_device_type": 2
00:08:58.725  }
00:08:58.725  ],
00:08:58.725  "driver_specific": {}
00:08:58.725  },
00:08:58.725  {
00:08:58.725  "name": "Passthru0",
00:08:58.725  "aliases": [
00:08:58.725  "9a867de3-ffe2-5298-8765-736ecf1cee22"
00:08:58.725  ],
00:08:58.725  "product_name": "passthru",
00:08:58.725  "block_size": 512,
00:08:58.725  "num_blocks": 16384,
00:08:58.725  "uuid": "9a867de3-ffe2-5298-8765-736ecf1cee22",
00:08:58.725  "assigned_rate_limits": {
00:08:58.725  "rw_ios_per_sec": 0,
00:08:58.725  "rw_mbytes_per_sec": 0,
00:08:58.725  "r_mbytes_per_sec": 0,
00:08:58.725  "w_mbytes_per_sec": 0
00:08:58.725  },
00:08:58.725  "claimed": false,
00:08:58.725  "zoned": false,
00:08:58.725  "supported_io_types": {
00:08:58.725  "read": true,
00:08:58.725  "write": true,
00:08:58.725  "unmap": true,
00:08:58.725  "flush": true,
00:08:58.725  "reset": true,
00:08:58.725  "nvme_admin": false,
00:08:58.725  "nvme_io": false,
00:08:58.725  "nvme_io_md": false,
00:08:58.725  "write_zeroes": true,
00:08:58.725  "zcopy": true,
00:08:58.725  "get_zone_info": false,
00:08:58.725  "zone_management": false,
00:08:58.725  "zone_append": false,
00:08:58.725  "compare": false,
00:08:58.725  "compare_and_write": false,
00:08:58.725  "abort": true,
00:08:58.725  "seek_hole": false,
00:08:58.725  "seek_data": false,
00:08:58.725  "copy": true,
00:08:58.725  "nvme_iov_md": false
00:08:58.725  },
00:08:58.725  "memory_domains": [
00:08:58.725  {
00:08:58.725  "dma_device_id": "system",
00:08:58.725  "dma_device_type": 1
00:08:58.725  },
00:08:58.725  {
00:08:58.725  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:58.725  "dma_device_type": 2
00:08:58.725  }
00:08:58.725  ],
00:08:58.725  "driver_specific": {
00:08:58.725  "passthru": {
00:08:58.725  "name": "Passthru0",
00:08:58.725  "base_bdev_name": "Malloc2"
00:08:58.725  }
00:08:58.725  }
00:08:58.725  }
00:08:58.725  ]'
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:08:58.725    04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:08:58.725  
00:08:58.725  real	0m0.319s
00:08:58.725  user	0m0.211s
00:08:58.725  sys	0m0.034s
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:58.725   04:59:12 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:08:58.725  ************************************
00:08:58.725  END TEST rpc_daemon_integrity
00:08:58.725  ************************************
00:08:58.725   04:59:12 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:08:58.725   04:59:12 rpc -- rpc/rpc.sh@84 -- # killprocess 123498
00:08:58.725   04:59:12 rpc -- common/autotest_common.sh@954 -- # '[' -z 123498 ']'
00:08:58.726   04:59:12 rpc -- common/autotest_common.sh@958 -- # kill -0 123498
00:08:58.726    04:59:12 rpc -- common/autotest_common.sh@959 -- # uname
00:08:58.726   04:59:12 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:58.726    04:59:12 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123498
00:08:58.986   04:59:12 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:58.986   04:59:12 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:58.986   04:59:12 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123498'
00:08:58.986  killing process with pid 123498
00:08:58.986   04:59:12 rpc -- common/autotest_common.sh@973 -- # kill 123498
00:08:58.986   04:59:12 rpc -- common/autotest_common.sh@978 -- # wait 123498
00:08:59.245  
00:08:59.246  real	0m3.039s
00:08:59.246  user	0m3.898s
00:08:59.246  sys	0m0.677s
00:08:59.246   04:59:13 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:59.246   04:59:13 rpc -- common/autotest_common.sh@10 -- # set +x
00:08:59.246  ************************************
00:08:59.246  END TEST rpc
00:08:59.246  ************************************
00:08:59.505   04:59:13  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:59.505   04:59:13  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:59.505   04:59:13  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:59.505   04:59:13  -- common/autotest_common.sh@10 -- # set +x
00:08:59.505  ************************************
00:08:59.505  START TEST skip_rpc
00:08:59.505  ************************************
00:08:59.505   04:59:13 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:08:59.505  * Looking for test storage...
00:08:59.505  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:08:59.505    04:59:13 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:08:59.505     04:59:13 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:08:59.505     04:59:13 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:08:59.505    04:59:13 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@345 -- # : 1
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:59.505     04:59:13 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:59.505    04:59:13 skip_rpc -- scripts/common.sh@368 -- # return 0
00:08:59.505    04:59:13 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:59.505    04:59:13 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:08:59.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:59.505  		--rc genhtml_branch_coverage=1
00:08:59.505  		--rc genhtml_function_coverage=1
00:08:59.505  		--rc genhtml_legend=1
00:08:59.505  		--rc geninfo_all_blocks=1
00:08:59.505  		--rc geninfo_unexecuted_blocks=1
00:08:59.505  		
00:08:59.505  		'
00:08:59.505    04:59:13 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:08:59.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:59.505  		--rc genhtml_branch_coverage=1
00:08:59.505  		--rc genhtml_function_coverage=1
00:08:59.505  		--rc genhtml_legend=1
00:08:59.505  		--rc geninfo_all_blocks=1
00:08:59.505  		--rc geninfo_unexecuted_blocks=1
00:08:59.505  		
00:08:59.505  		'
00:08:59.505    04:59:13 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:08:59.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:59.505  		--rc genhtml_branch_coverage=1
00:08:59.505  		--rc genhtml_function_coverage=1
00:08:59.505  		--rc genhtml_legend=1
00:08:59.505  		--rc geninfo_all_blocks=1
00:08:59.505  		--rc geninfo_unexecuted_blocks=1
00:08:59.505  		
00:08:59.505  		'
00:08:59.505    04:59:13 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:59.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:59.505  		--rc genhtml_branch_coverage=1
00:08:59.505  		--rc genhtml_function_coverage=1
00:08:59.505  		--rc genhtml_legend=1
00:08:59.505  		--rc geninfo_all_blocks=1
00:08:59.505  		--rc geninfo_unexecuted_blocks=1
00:08:59.505  		
00:08:59.505  		'
00:08:59.505   04:59:13 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:08:59.505   04:59:13 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:08:59.505   04:59:13 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:08:59.505   04:59:13 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:08:59.505   04:59:13 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:59.505   04:59:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:08:59.505  ************************************
00:08:59.505  START TEST skip_rpc
00:08:59.505  ************************************
00:08:59.505   04:59:13 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:08:59.505   04:59:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=123736
00:08:59.505   04:59:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:08:59.505   04:59:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:08:59.505   04:59:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:08:59.764  [2024-11-20 04:59:13.511775] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:08:59.764  [2024-11-20 04:59:13.512096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123736 ]
00:08:59.764  [2024-11-20 04:59:13.664738] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:08:59.764  [2024-11-20 04:59:13.697924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:00.023  [2024-11-20 04:59:13.744251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:05.293    04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 123736
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 123736 ']'
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 123736
00:09:05.293    04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:05.293    04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123736
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:05.293  killing process with pid 123736
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123736'
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 123736
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 123736
00:09:05.293  
00:09:05.293  real	0m5.436s
00:09:05.293  user	0m4.998s
00:09:05.293  sys	0m0.354s
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:05.293  ************************************
00:09:05.293  END TEST skip_rpc
00:09:05.293  ************************************
00:09:05.293   04:59:18 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:05.293   04:59:18 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:09:05.293   04:59:18 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:05.293   04:59:18 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:05.293   04:59:18 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:05.293  ************************************
00:09:05.293  START TEST skip_rpc_with_json
00:09:05.293  ************************************
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=123836
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 123836
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 123836 ']'
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:05.293  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:05.293   04:59:18 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:05.293  [2024-11-20 04:59:18.986987] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:05.293  [2024-11-20 04:59:18.987460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123836 ]
00:09:05.293  [2024-11-20 04:59:19.135608] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:05.293  [2024-11-20 04:59:19.163992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:05.293  [2024-11-20 04:59:19.202535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:06.228  [2024-11-20 04:59:19.929739] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:09:06.228  request:
00:09:06.228  {
00:09:06.228  "trtype": "tcp",
00:09:06.228  "method": "nvmf_get_transports",
00:09:06.228  "req_id": 1
00:09:06.228  }
00:09:06.228  Got JSON-RPC error response
00:09:06.228  response:
00:09:06.228  {
00:09:06.228  "code": -19,
00:09:06.228  "message": "No such device"
00:09:06.228  }
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:06.228  [2024-11-20 04:59:19.937918] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:06.228   04:59:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:06.229   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:06.229   04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:06.229  {
00:09:06.229  "subsystems": [
00:09:06.229  {
00:09:06.229  "subsystem": "scheduler",
00:09:06.229  "config": [
00:09:06.229  {
00:09:06.229  "method": "framework_set_scheduler",
00:09:06.229  "params": {
00:09:06.229  "name": "static"
00:09:06.229  }
00:09:06.229  }
00:09:06.229  ]
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "vmd",
00:09:06.229  "config": []
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "sock",
00:09:06.229  "config": [
00:09:06.229  {
00:09:06.229  "method": "sock_set_default_impl",
00:09:06.229  "params": {
00:09:06.229  "impl_name": "posix"
00:09:06.229  }
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "method": "sock_impl_set_options",
00:09:06.229  "params": {
00:09:06.229  "impl_name": "ssl",
00:09:06.229  "recv_buf_size": 4096,
00:09:06.229  "send_buf_size": 4096,
00:09:06.229  "enable_recv_pipe": true,
00:09:06.229  "enable_quickack": false,
00:09:06.229  "enable_placement_id": 0,
00:09:06.229  "enable_zerocopy_send_server": true,
00:09:06.229  "enable_zerocopy_send_client": false,
00:09:06.229  "zerocopy_threshold": 0,
00:09:06.229  "tls_version": 0,
00:09:06.229  "enable_ktls": false
00:09:06.229  }
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "method": "sock_impl_set_options",
00:09:06.229  "params": {
00:09:06.229  "impl_name": "posix",
00:09:06.229  "recv_buf_size": 2097152,
00:09:06.229  "send_buf_size": 2097152,
00:09:06.229  "enable_recv_pipe": true,
00:09:06.229  "enable_quickack": false,
00:09:06.229  "enable_placement_id": 0,
00:09:06.229  "enable_zerocopy_send_server": true,
00:09:06.229  "enable_zerocopy_send_client": false,
00:09:06.229  "zerocopy_threshold": 0,
00:09:06.229  "tls_version": 0,
00:09:06.229  "enable_ktls": false
00:09:06.229  }
00:09:06.229  }
00:09:06.229  ]
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "iobuf",
00:09:06.229  "config": [
00:09:06.229  {
00:09:06.229  "method": "iobuf_set_options",
00:09:06.229  "params": {
00:09:06.229  "small_pool_count": 8192,
00:09:06.229  "large_pool_count": 1024,
00:09:06.229  "small_bufsize": 8192,
00:09:06.229  "large_bufsize": 135168,
00:09:06.229  "enable_numa": false
00:09:06.229  }
00:09:06.229  }
00:09:06.229  ]
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "keyring",
00:09:06.229  "config": []
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "fsdev",
00:09:06.229  "config": [
00:09:06.229  {
00:09:06.229  "method": "fsdev_set_opts",
00:09:06.229  "params": {
00:09:06.229  "fsdev_io_pool_size": 65535,
00:09:06.229  "fsdev_io_cache_size": 256
00:09:06.229  }
00:09:06.229  }
00:09:06.229  ]
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "accel",
00:09:06.229  "config": [
00:09:06.229  {
00:09:06.229  "method": "accel_set_options",
00:09:06.229  "params": {
00:09:06.229  "small_cache_size": 128,
00:09:06.229  "large_cache_size": 16,
00:09:06.229  "task_count": 2048,
00:09:06.229  "sequence_count": 2048,
00:09:06.229  "buf_count": 2048
00:09:06.229  }
00:09:06.229  }
00:09:06.229  ]
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "bdev",
00:09:06.229  "config": [
00:09:06.229  {
00:09:06.229  "method": "bdev_set_options",
00:09:06.229  "params": {
00:09:06.229  "bdev_io_pool_size": 65535,
00:09:06.229  "bdev_io_cache_size": 256,
00:09:06.229  "bdev_auto_examine": true,
00:09:06.229  "iobuf_small_cache_size": 128,
00:09:06.229  "iobuf_large_cache_size": 16
00:09:06.229  }
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "method": "bdev_raid_set_options",
00:09:06.229  "params": {
00:09:06.229  "process_window_size_kb": 1024,
00:09:06.229  "process_max_bandwidth_mb_sec": 0
00:09:06.229  }
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "method": "bdev_nvme_set_options",
00:09:06.229  "params": {
00:09:06.229  "action_on_timeout": "none",
00:09:06.229  "timeout_us": 0,
00:09:06.229  "timeout_admin_us": 0,
00:09:06.229  "keep_alive_timeout_ms": 10000,
00:09:06.229  "arbitration_burst": 0,
00:09:06.229  "low_priority_weight": 0,
00:09:06.229  "medium_priority_weight": 0,
00:09:06.229  "high_priority_weight": 0,
00:09:06.229  "nvme_adminq_poll_period_us": 10000,
00:09:06.229  "nvme_ioq_poll_period_us": 0,
00:09:06.229  "io_queue_requests": 0,
00:09:06.229  "delay_cmd_submit": true,
00:09:06.229  "transport_retry_count": 4,
00:09:06.229  "bdev_retry_count": 3,
00:09:06.229  "transport_ack_timeout": 0,
00:09:06.229  "ctrlr_loss_timeout_sec": 0,
00:09:06.229  "reconnect_delay_sec": 0,
00:09:06.229  "fast_io_fail_timeout_sec": 0,
00:09:06.229  "disable_auto_failback": false,
00:09:06.229  "generate_uuids": false,
00:09:06.229  "transport_tos": 0,
00:09:06.229  "nvme_error_stat": false,
00:09:06.229  "rdma_srq_size": 0,
00:09:06.229  "io_path_stat": false,
00:09:06.229  "allow_accel_sequence": false,
00:09:06.229  "rdma_max_cq_size": 0,
00:09:06.229  "rdma_cm_event_timeout_ms": 0,
00:09:06.229  "dhchap_digests": [
00:09:06.229  "sha256",
00:09:06.229  "sha384",
00:09:06.229  "sha512"
00:09:06.229  ],
00:09:06.229  "dhchap_dhgroups": [
00:09:06.229  "null",
00:09:06.229  "ffdhe2048",
00:09:06.229  "ffdhe3072",
00:09:06.229  "ffdhe4096",
00:09:06.229  "ffdhe6144",
00:09:06.229  "ffdhe8192"
00:09:06.229  ]
00:09:06.229  }
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "method": "bdev_nvme_set_hotplug",
00:09:06.229  "params": {
00:09:06.229  "period_us": 100000,
00:09:06.229  "enable": false
00:09:06.229  }
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "method": "bdev_iscsi_set_options",
00:09:06.229  "params": {
00:09:06.229  "timeout_sec": 30
00:09:06.229  }
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "method": "bdev_wait_for_examine"
00:09:06.229  }
00:09:06.229  ]
00:09:06.229  },
00:09:06.229  {
00:09:06.229  "subsystem": "nvmf",
00:09:06.229  "config": [
00:09:06.229  {
00:09:06.229  "method": "nvmf_set_config",
00:09:06.229  "params": {
00:09:06.229  "discovery_filter": "match_any",
00:09:06.229  "admin_cmd_passthru": {
00:09:06.229  "identify_ctrlr": false
00:09:06.229  },
00:09:06.229  "dhchap_digests": [
00:09:06.229  "sha256",
00:09:06.229  "sha384",
00:09:06.229  "sha512"
00:09:06.229  ],
00:09:06.229  "dhchap_dhgroups": [
00:09:06.229  "null",
00:09:06.229  "ffdhe2048",
00:09:06.229  "ffdhe3072",
00:09:06.229  "ffdhe4096",
00:09:06.229  "ffdhe6144",
00:09:06.230  "ffdhe8192"
00:09:06.230  ]
00:09:06.230  }
00:09:06.230  },
00:09:06.230  {
00:09:06.230  "method": "nvmf_set_max_subsystems",
00:09:06.230  "params": {
00:09:06.230  "max_subsystems": 1024
00:09:06.230  }
00:09:06.230  },
00:09:06.230  {
00:09:06.230  "method": "nvmf_set_crdt",
00:09:06.230  "params": {
00:09:06.230  "crdt1": 0,
00:09:06.230  "crdt2": 0,
00:09:06.230  "crdt3": 0
00:09:06.230  }
00:09:06.230  },
00:09:06.230  {
00:09:06.230  "method": "nvmf_create_transport",
00:09:06.230            "params": {
00:09:06.230              "trtype": "TCP",
00:09:06.230              "max_queue_depth": 128,
00:09:06.230              "max_io_qpairs_per_ctrlr": 127,
00:09:06.230              "in_capsule_data_size": 4096,
00:09:06.230              "max_io_size": 131072,
00:09:06.230              "io_unit_size": 131072,
00:09:06.230              "max_aq_depth": 128,
00:09:06.230              "num_shared_buffers": 511,
00:09:06.230              "buf_cache_size": 4294967295,
00:09:06.230              "dif_insert_or_strip": false,
00:09:06.230              "zcopy": false,
00:09:06.230              "c2h_success": true,
00:09:06.230              "sock_priority": 0,
00:09:06.230              "abort_timeout_sec": 1,
00:09:06.230              "ack_timeout": 0,
00:09:06.230              "data_wr_pool_size": 0
00:09:06.230            }
00:09:06.230          }
00:09:06.230        ]
00:09:06.230      },
00:09:06.230      {
00:09:06.230        "subsystem": "nbd",
00:09:06.230        "config": []
00:09:06.230      },
00:09:06.230      {
00:09:06.230        "subsystem": "vhost_blk",
00:09:06.230        "config": []
00:09:06.230      },
00:09:06.230      {
00:09:06.230        "subsystem": "scsi",
00:09:06.230        "config": null
00:09:06.230      },
00:09:06.230      {
00:09:06.230        "subsystem": "iscsi",
00:09:06.230        "config": [
00:09:06.230          {
00:09:06.230            "method": "iscsi_set_options",
00:09:06.230            "params": {
00:09:06.230              "node_base": "iqn.2016-06.io.spdk",
00:09:06.230              "max_sessions": 128,
00:09:06.230              "max_connections_per_session": 2,
00:09:06.230              "max_queue_depth": 64,
00:09:06.230              "default_time2wait": 2,
00:09:06.230              "default_time2retain": 20,
00:09:06.230              "first_burst_length": 8192,
00:09:06.230              "immediate_data": true,
00:09:06.230              "allow_duplicated_isid": false,
00:09:06.230              "error_recovery_level": 0,
00:09:06.230              "nop_timeout": 60,
00:09:06.230              "nop_in_interval": 30,
00:09:06.230              "disable_chap": false,
00:09:06.230              "require_chap": false,
00:09:06.230              "mutual_chap": false,
00:09:06.230              "chap_group": 0,
00:09:06.230              "max_large_datain_per_connection": 64,
00:09:06.230              "max_r2t_per_connection": 4,
00:09:06.230              "pdu_pool_size": 36864,
00:09:06.230              "immediate_data_pool_size": 16384,
00:09:06.230              "data_out_pool_size": 2048
00:09:06.230            }
00:09:06.230          }
00:09:06.230        ]
00:09:06.230      },
00:09:06.230      {
00:09:06.230        "subsystem": "vhost_scsi",
00:09:06.230        "config": []
00:09:06.230      }
00:09:06.230    ]
00:09:06.230  }
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 123836
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 123836 ']'
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 123836
00:09:06.230    04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:06.230    04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123836
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:06.230  killing process with pid 123836
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123836'
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 123836
00:09:06.230   04:59:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 123836
00:09:06.809   04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=123869
00:09:06.809   04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:09:06.809   04:59:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:12.094   04:59:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 123869
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 123869 ']'
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 123869
00:09:12.095    04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:12.095    04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123869
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:12.095  killing process with pid 123869
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123869'
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 123869
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 123869
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:09:12.095  
00:09:12.095  real	0m7.025s
00:09:12.095  user	0m6.624s
00:09:12.095  sys	0m0.719s
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.095  ************************************
00:09:12.095  END TEST skip_rpc_with_json
00:09:12.095  ************************************
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:09:12.095   04:59:25 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:09:12.095   04:59:25 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:12.095   04:59:25 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.095   04:59:25 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:12.095  ************************************
00:09:12.095  START TEST skip_rpc_with_delay
00:09:12.095  ************************************
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:12.095    04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:12.095    04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:12.095   04:59:25 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:12.095   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:12.095   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:09:12.095   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:09:12.354  [2024-11-20 04:59:26.073530] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:09:12.354   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:09:12.354   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:12.354   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:09:12.354   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:12.354  
00:09:12.354  real	0m0.137s
00:09:12.354  user	0m0.074s
00:09:12.354  sys	0m0.063s
00:09:12.354   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:12.354   04:59:26 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:09:12.354  ************************************
00:09:12.354  END TEST skip_rpc_with_delay
00:09:12.354  ************************************
00:09:12.354    04:59:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:09:12.354   04:59:26 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:09:12.354   04:59:26 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:09:12.354   04:59:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:12.354   04:59:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:12.354   04:59:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:12.354  ************************************
00:09:12.354  START TEST exit_on_failed_rpc_init
00:09:12.354  ************************************
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=123993
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 123993
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 123993 ']'
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:12.354  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:12.354   04:59:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:12.354  [2024-11-20 04:59:26.266810] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:12.354  [2024-11-20 04:59:26.267086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid123993 ]
00:09:12.613  [2024-11-20 04:59:26.416413] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:12.613  [2024-11-20 04:59:26.443871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:12.613  [2024-11-20 04:59:26.478274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:13.549    04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:13.549    04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:09:13.549   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2
00:09:13.549  [2024-11-20 04:59:27.322977] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:13.549  [2024-11-20 04:59:27.323266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124016 ]
00:09:13.549  [2024-11-20 04:59:27.472314] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:13.549  [2024-11-20 04:59:27.499940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:13.808  [2024-11-20 04:59:27.532389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:13.808  [2024-11-20 04:59:27.532543] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:09:13.808  [2024-11-20 04:59:27.532579] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:09:13.808  [2024-11-20 04:59:27.532622] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 123993
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 123993 ']'
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 123993
00:09:13.808    04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:13.808    04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 123993
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:13.808  killing process with pid 123993
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 123993'
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 123993
00:09:13.808   04:59:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 123993
00:09:14.376  
00:09:14.376  real	0m1.868s
00:09:14.376  user	0m2.064s
00:09:14.376  sys	0m0.527s
00:09:14.376   04:59:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.376   04:59:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:09:14.376  ************************************
00:09:14.376  END TEST exit_on_failed_rpc_init
00:09:14.376  ************************************
00:09:14.376   04:59:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:09:14.376  
00:09:14.376  real	0m14.862s
00:09:14.376  user	0m14.001s
00:09:14.376  sys	0m1.819s
00:09:14.376   04:59:28 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.376   04:59:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:14.376  ************************************
00:09:14.376  END TEST skip_rpc
00:09:14.376  ************************************
00:09:14.376   04:59:28  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:09:14.376   04:59:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:14.376   04:59:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.376   04:59:28  -- common/autotest_common.sh@10 -- # set +x
00:09:14.376  ************************************
00:09:14.376  START TEST rpc_client
00:09:14.376  ************************************
00:09:14.376   04:59:28 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:09:14.376  * Looking for test storage...
00:09:14.376  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:09:14.376    04:59:28 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:14.376     04:59:28 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:09:14.376     04:59:28 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:14.376    04:59:28 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:09:14.376    04:59:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@345 -- # : 1
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@353 -- # local d=1
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@355 -- # echo 1
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@353 -- # local d=2
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:14.377     04:59:28 rpc_client -- scripts/common.sh@355 -- # echo 2
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:14.377    04:59:28 rpc_client -- scripts/common.sh@368 -- # return 0
00:09:14.377    04:59:28 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:14.377    04:59:28 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:14.377  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.377  		--rc genhtml_branch_coverage=1
00:09:14.377  		--rc genhtml_function_coverage=1
00:09:14.377  		--rc genhtml_legend=1
00:09:14.377  		--rc geninfo_all_blocks=1
00:09:14.377  		--rc geninfo_unexecuted_blocks=1
00:09:14.377  		
00:09:14.377  		'
00:09:14.377    04:59:28 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:14.377  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.377  		--rc genhtml_branch_coverage=1
00:09:14.377  		--rc genhtml_function_coverage=1
00:09:14.377  		--rc genhtml_legend=1
00:09:14.377  		--rc geninfo_all_blocks=1
00:09:14.377  		--rc geninfo_unexecuted_blocks=1
00:09:14.377  		
00:09:14.377  		'
00:09:14.377    04:59:28 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:14.377  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.377  		--rc genhtml_branch_coverage=1
00:09:14.377  		--rc genhtml_function_coverage=1
00:09:14.377  		--rc genhtml_legend=1
00:09:14.377  		--rc geninfo_all_blocks=1
00:09:14.377  		--rc geninfo_unexecuted_blocks=1
00:09:14.377  		
00:09:14.377  		'
00:09:14.377    04:59:28 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:14.377  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.377  		--rc genhtml_branch_coverage=1
00:09:14.377  		--rc genhtml_function_coverage=1
00:09:14.377  		--rc genhtml_legend=1
00:09:14.377  		--rc geninfo_all_blocks=1
00:09:14.377  		--rc geninfo_unexecuted_blocks=1
00:09:14.377  		
00:09:14.377  		'
00:09:14.377   04:59:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:09:14.636  OK
00:09:14.636   04:59:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:09:14.636  
00:09:14.636  real	0m0.237s
00:09:14.636  user	0m0.167s
00:09:14.636  sys	0m0.085s
00:09:14.636   04:59:28 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:14.636  ************************************
00:09:14.636  END TEST rpc_client
00:09:14.636  ************************************
00:09:14.636   04:59:28 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:09:14.636   04:59:28  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:09:14.636   04:59:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:14.636   04:59:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:14.636   04:59:28  -- common/autotest_common.sh@10 -- # set +x
00:09:14.636  ************************************
00:09:14.636  START TEST json_config
00:09:14.636  ************************************
00:09:14.636   04:59:28 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:09:14.636    04:59:28 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:14.636     04:59:28 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:09:14.636     04:59:28 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:14.636    04:59:28 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:14.636    04:59:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:14.636    04:59:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:14.636    04:59:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:14.636    04:59:28 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:09:14.637    04:59:28 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:09:14.637    04:59:28 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:09:14.637    04:59:28 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:09:14.637    04:59:28 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:09:14.637    04:59:28 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:09:14.637    04:59:28 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:09:14.637    04:59:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:14.637    04:59:28 json_config -- scripts/common.sh@344 -- # case "$op" in
00:09:14.637    04:59:28 json_config -- scripts/common.sh@345 -- # : 1
00:09:14.637    04:59:28 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:14.637    04:59:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:14.637     04:59:28 json_config -- scripts/common.sh@365 -- # decimal 1
00:09:14.637     04:59:28 json_config -- scripts/common.sh@353 -- # local d=1
00:09:14.637     04:59:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:14.637     04:59:28 json_config -- scripts/common.sh@355 -- # echo 1
00:09:14.637    04:59:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:09:14.637     04:59:28 json_config -- scripts/common.sh@366 -- # decimal 2
00:09:14.637     04:59:28 json_config -- scripts/common.sh@353 -- # local d=2
00:09:14.637     04:59:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:14.637     04:59:28 json_config -- scripts/common.sh@355 -- # echo 2
00:09:14.637    04:59:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:09:14.637    04:59:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:14.637    04:59:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:14.637    04:59:28 json_config -- scripts/common.sh@368 -- # return 0
00:09:14.637    04:59:28 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:14.637    04:59:28 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:14.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.637  		--rc genhtml_branch_coverage=1
00:09:14.637  		--rc genhtml_function_coverage=1
00:09:14.637  		--rc genhtml_legend=1
00:09:14.637  		--rc geninfo_all_blocks=1
00:09:14.637  		--rc geninfo_unexecuted_blocks=1
00:09:14.637  		
00:09:14.637  		'
00:09:14.637    04:59:28 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:14.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.637  		--rc genhtml_branch_coverage=1
00:09:14.637  		--rc genhtml_function_coverage=1
00:09:14.637  		--rc genhtml_legend=1
00:09:14.637  		--rc geninfo_all_blocks=1
00:09:14.637  		--rc geninfo_unexecuted_blocks=1
00:09:14.637  		
00:09:14.637  		'
00:09:14.637    04:59:28 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:14.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.637  		--rc genhtml_branch_coverage=1
00:09:14.637  		--rc genhtml_function_coverage=1
00:09:14.637  		--rc genhtml_legend=1
00:09:14.637  		--rc geninfo_all_blocks=1
00:09:14.637  		--rc geninfo_unexecuted_blocks=1
00:09:14.637  		
00:09:14.637  		'
00:09:14.637    04:59:28 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:14.637  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:14.637  		--rc genhtml_branch_coverage=1
00:09:14.637  		--rc genhtml_function_coverage=1
00:09:14.637  		--rc genhtml_legend=1
00:09:14.637  		--rc geninfo_all_blocks=1
00:09:14.637  		--rc geninfo_unexecuted_blocks=1
00:09:14.637  		
00:09:14.637  		'
00:09:14.637   04:59:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:14.637     04:59:28 json_config -- nvmf/common.sh@7 -- # uname -s
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:14.637    04:59:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:14.637     04:59:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:14.896    04:59:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:810b4abf-b355-4685-9aff-049cc51d81f8
00:09:14.896    04:59:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=810b4abf-b355-4685-9aff-049cc51d81f8
00:09:14.896    04:59:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:14.896    04:59:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:14.896    04:59:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:14.896    04:59:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:14.896    04:59:28 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:14.896     04:59:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:09:14.896     04:59:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:14.896     04:59:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:14.896     04:59:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:14.896      04:59:28 json_config -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:14.897      04:59:28 json_config -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:14.897      04:59:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:14.897      04:59:28 json_config -- paths/export.sh@5 -- # export PATH
00:09:14.897      04:59:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@51 -- # : 0
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:14.897  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:14.897    04:59:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:14.897  INFO: JSON configuration test init
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:14.897   04:59:28 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:09:14.897   04:59:28 json_config -- json_config/common.sh@9 -- # local app=target
00:09:14.897   04:59:28 json_config -- json_config/common.sh@10 -- # shift
00:09:14.897   04:59:28 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:14.897   04:59:28 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:14.897   04:59:28 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:14.897   04:59:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:14.897   04:59:28 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:14.897   04:59:28 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=124162
00:09:14.897  Waiting for target to run...
00:09:14.897   04:59:28 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:14.897   04:59:28 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:09:14.897   04:59:28 json_config -- json_config/common.sh@25 -- # waitforlisten 124162 /var/tmp/spdk_tgt.sock
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@835 -- # '[' -z 124162 ']'
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:14.897  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:14.897   04:59:28 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:14.897  [2024-11-20 04:59:28.675306] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:14.897  [2024-11-20 04:59:28.675593] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124162 ]
00:09:15.157  [2024-11-20 04:59:29.078272] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:15.157  [2024-11-20 04:59:29.104379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:15.420  [2024-11-20 04:59:29.137828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:15.989   04:59:29 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:15.989   04:59:29 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:15.989  
00:09:15.989   04:59:29 json_config -- json_config/common.sh@26 -- # echo ''
00:09:15.989   04:59:29 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:09:15.989   04:59:29 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:09:15.989   04:59:29 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:15.989   04:59:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:15.989   04:59:29 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:09:15.989   04:59:29 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:09:15.989   04:59:29 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:15.989   04:59:29 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:15.989   04:59:29 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:09:15.989   04:59:29 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:09:15.989   04:59:29 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:09:16.247   04:59:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:09:16.247   04:59:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:09:16.247   04:59:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:16.247   04:59:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:16.247   04:59:30 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:09:16.247   04:59:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:09:16.247   04:59:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:09:16.247   04:59:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:09:16.247   04:59:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:09:16.247    04:59:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:09:16.247    04:59:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:09:16.247    04:59:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@51 -- # local get_types
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@54 -- # sort
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:09:16.506   04:59:30 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:16.506   04:59:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@62 -- # return 0
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@285 -- # [[ 1 -eq 1 ]]
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@286 -- # create_bdev_subsystem_config
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@112 -- # timing_enter create_bdev_subsystem_config
00:09:16.506   04:59:30 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:16.506   04:59:30 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@114 -- # expected_notifications=()
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@114 -- # local expected_notifications
00:09:16.506   04:59:30 json_config -- json_config/json_config.sh@118 -- # expected_notifications+=($(get_notifications))
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@118 -- # get_notifications
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:16.506    04:59:30 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:16.506     04:59:30 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:09:16.506     04:59:30 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:09:16.506     04:59:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:09:16.765    04:59:30 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:09:16.765    04:59:30 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:16.765    04:59:30 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:16.765   04:59:30 json_config -- json_config/json_config.sh@120 -- # [[ 1 -eq 1 ]]
00:09:16.765   04:59:30 json_config -- json_config/json_config.sh@121 -- # local lvol_store_base_bdev=Nvme0n1
00:09:16.765   04:59:30 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:09:16.765   04:59:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:09:17.024  Nvme0n1p0 Nvme0n1p1
00:09:17.024   04:59:30 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_split_create Malloc0 3
00:09:17.024   04:59:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:09:17.282  [2024-11-20 04:59:31.098371] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:17.282  [2024-11-20 04:59:31.098467] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:17.282  
00:09:17.282   04:59:31 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:09:17.282   04:59:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:09:17.541  Malloc3
00:09:17.541   04:59:31 json_config -- json_config/json_config.sh@126 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:09:17.541   04:59:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:09:17.800  [2024-11-20 04:59:31.598556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:17.800  [2024-11-20 04:59:31.598674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:17.800  [2024-11-20 04:59:31.598736] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:09:17.800  [2024-11-20 04:59:31.598755] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:17.800  [2024-11-20 04:59:31.600985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:17.800  [2024-11-20 04:59:31.601048] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:09:17.800  PTBdevFromMalloc3
00:09:17.800   04:59:31 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_null_create Null0 32 512
00:09:17.800   04:59:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:09:18.059  Null0
00:09:18.059   04:59:31 json_config -- json_config/json_config.sh@130 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:09:18.059   04:59:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:09:18.318  Malloc0
00:09:18.318   04:59:32 json_config -- json_config/json_config.sh@131 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:09:18.318   04:59:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:09:18.577  Malloc1
00:09:18.577   04:59:32 json_config -- json_config/json_config.sh@144 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:09:18.577   04:59:32 json_config -- json_config/json_config.sh@147 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:09:18.836  102400+0 records in
00:09:18.836  102400+0 records out
00:09:18.836  104857600 bytes (105 MB, 100 MiB) copied, 0.255408 s, 411 MB/s
00:09:18.836   04:59:32 json_config -- json_config/json_config.sh@148 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:09:18.836   04:59:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:09:19.094  aio_disk
00:09:19.094   04:59:32 json_config -- json_config/json_config.sh@149 -- # expected_notifications+=(bdev_register:aio_disk)
00:09:19.094   04:59:32 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:09:19.094   04:59:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:09:19.353  78654709-f3ed-4b21-b6c3-d117c31df279
00:09:19.353   04:59:33 json_config -- json_config/json_config.sh@161 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:09:19.353    04:59:33 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:09:19.353    04:59:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:09:19.612    04:59:33 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:09:19.612    04:59:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:09:19.871    04:59:33 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:09:19.871    04:59:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:09:20.129    04:59:33 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:09:20.129    04:59:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
00:09:20.387   04:59:34 json_config -- json_config/json_config.sh@164 -- # [[ 0 -eq 1 ]]
00:09:20.387   04:59:34 json_config -- json_config/json_config.sh@179 -- # [[ 0 -eq 1 ]]
00:09:20.387   04:59:34 json_config -- json_config/json_config.sh@185 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:dc66829e-4aaa-48f9-a5d8-8b225a68e08f bdev_register:cee38bd8-613f-48bb-9203-884259161e05 bdev_register:8a208f94-390c-4a3d-b79c-8a7449f91a22 bdev_register:247cc2fa-2518-4bb8-800b-2ee8655610bf
00:09:20.387   04:59:34 json_config -- json_config/json_config.sh@74 -- # local events_to_check
00:09:20.387   04:59:34 json_config -- json_config/json_config.sh@75 -- # local recorded_events
00:09:20.387   04:59:34 json_config -- json_config/json_config.sh@78 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:09:20.387    04:59:34 json_config -- json_config/json_config.sh@78 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:dc66829e-4aaa-48f9-a5d8-8b225a68e08f bdev_register:cee38bd8-613f-48bb-9203-884259161e05 bdev_register:8a208f94-390c-4a3d-b79c-8a7449f91a22 bdev_register:247cc2fa-2518-4bb8-800b-2ee8655610bf
00:09:20.387    04:59:34 json_config -- json_config/json_config.sh@78 -- # sort
00:09:20.387   04:59:34 json_config -- json_config/json_config.sh@79 -- # recorded_events=($(get_notifications | sort))
00:09:20.388    04:59:34 json_config -- json_config/json_config.sh@79 -- # get_notifications
00:09:20.388    04:59:34 json_config -- json_config/json_config.sh@79 -- # sort
00:09:20.388    04:59:34 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:09:20.388    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.388    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.388     04:59:34 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:09:20.388     04:59:34 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:09:20.388     04:59:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p1
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p0
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc3
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:PTBdevFromMalloc3
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Null0
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p2
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p1
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p0
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc1
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:aio_disk
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:dc66829e-4aaa-48f9-a5d8-8b225a68e08f
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:cee38bd8-613f-48bb-9203-884259161e05
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:8a208f94-390c-4a3d-b79c-8a7449f91a22
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:247cc2fa-2518-4bb8-800b-2ee8655610bf
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@81 -- # [[ bdev_register:247cc2fa-2518-4bb8-800b-2ee8655610bf bdev_register:8a208f94-390c-4a3d-b79c-8a7449f91a22 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:cee38bd8-613f-48bb-9203-884259161e05 bdev_register:dc66829e-4aaa-48f9-a5d8-8b225a68e08f != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\4\7\c\c\2\f\a\-\2\5\1\8\-\4\b\b\8\-\8\0\0\b\-\2\e\e\8\6\5\5\6\1\0\b\f\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\a\2\0\8\f\9\4\-\3\9\0\c\-\4\a\3\d\-\b\7\9\c\-\8\a\7\4\4\9\f\9\1\a\2\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\c\e\e\3\8\b\d\8\-\6\1\3\f\-\4\8\b\b\-\9\2\0\3\-\8\8\4\2\5\9\1\6\1\e\0\5\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\d\c\6\6\8\2\9\e\-\4\a\a\a\-\4\8\f\9\-\a\5\d\8\-\8\b\2\2\5\a\6\8\e\0\8\f ]]
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@93 -- # cat
00:09:20.647    04:59:34 json_config -- json_config/json_config.sh@93 -- # printf ' %s\n' bdev_register:247cc2fa-2518-4bb8-800b-2ee8655610bf bdev_register:8a208f94-390c-4a3d-b79c-8a7449f91a22 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:cee38bd8-613f-48bb-9203-884259161e05 bdev_register:dc66829e-4aaa-48f9-a5d8-8b225a68e08f
00:09:20.647  Expected events matched:
00:09:20.647   bdev_register:247cc2fa-2518-4bb8-800b-2ee8655610bf
00:09:20.647   bdev_register:8a208f94-390c-4a3d-b79c-8a7449f91a22
00:09:20.647   bdev_register:Malloc0
00:09:20.647   bdev_register:Malloc0p0
00:09:20.647   bdev_register:Malloc0p1
00:09:20.647   bdev_register:Malloc0p2
00:09:20.647   bdev_register:Malloc1
00:09:20.647   bdev_register:Malloc3
00:09:20.647   bdev_register:Null0
00:09:20.647   bdev_register:Nvme0n1
00:09:20.647   bdev_register:Nvme0n1p0
00:09:20.647   bdev_register:Nvme0n1p1
00:09:20.647   bdev_register:PTBdevFromMalloc3
00:09:20.647   bdev_register:aio_disk
00:09:20.647   bdev_register:cee38bd8-613f-48bb-9203-884259161e05
00:09:20.647   bdev_register:dc66829e-4aaa-48f9-a5d8-8b225a68e08f
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@187 -- # timing_exit create_bdev_subsystem_config
00:09:20.647   04:59:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:20.647   04:59:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@297 -- # [[ 0 -eq 1 ]]
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:09:20.647   04:59:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:20.647   04:59:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:20.647   04:59:34 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:09:20.648   04:59:34 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:20.648   04:59:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:09:20.906  MallocBdevForConfigChangeCheck
00:09:20.906   04:59:34 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:09:20.906   04:59:34 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:20.906   04:59:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:20.906   04:59:34 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:09:20.906   04:59:34 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:21.474  INFO: shutting down applications...
00:09:21.474   04:59:35 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:09:21.474   04:59:35 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:09:21.474   04:59:35 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:09:21.474   04:59:35 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:09:21.474   04:59:35 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:09:21.474  [2024-11-20 04:59:35.409726] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:09:21.733  Calling clear_vhost_scsi_subsystem
00:09:21.733  Calling clear_iscsi_subsystem
00:09:21.733  Calling clear_vhost_blk_subsystem
00:09:21.733  Calling clear_nbd_subsystem
00:09:21.733  Calling clear_nvmf_subsystem
00:09:21.733  Calling clear_bdev_subsystem
00:09:21.733   04:59:35 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:09:21.733   04:59:35 json_config -- json_config/json_config.sh@350 -- # count=100
00:09:21.733   04:59:35 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:09:21.733   04:59:35 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:21.733   04:59:35 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:09:21.733   04:59:35 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:09:22.300   04:59:36 json_config -- json_config/json_config.sh@352 -- # break
00:09:22.300   04:59:36 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:09:22.300   04:59:36 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:09:22.300   04:59:36 json_config -- json_config/common.sh@31 -- # local app=target
00:09:22.300   04:59:36 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:22.300   04:59:36 json_config -- json_config/common.sh@35 -- # [[ -n 124162 ]]
00:09:22.300   04:59:36 json_config -- json_config/common.sh@38 -- # kill -SIGINT 124162
00:09:22.300   04:59:36 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:22.300   04:59:36 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:22.300   04:59:36 json_config -- json_config/common.sh@41 -- # kill -0 124162
00:09:22.300   04:59:36 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:09:22.867  SPDK target shutdown done
00:09:22.867   04:59:36 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:09:22.867   04:59:36 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:22.867   04:59:36 json_config -- json_config/common.sh@41 -- # kill -0 124162
00:09:22.867   04:59:36 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:22.867   04:59:36 json_config -- json_config/common.sh@43 -- # break
00:09:22.867   04:59:36 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:22.867   04:59:36 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
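The shutdown sequence traced above (common.sh@38-45) is a common pattern: send SIGINT, then poll with `kill -0` (signal 0 checks existence without signalling) up to 30 times with a 0.5 s sleep. A minimal standalone sketch, where the `sleep` process is an assumed stand-in for spdk_tgt:

```shell
#!/usr/bin/env bash
# Stand-in for the target process (assumption: spdk_tgt would be here)
sleep 60 &
pid=$!

# Request graceful shutdown, then poll for exit with a bounded retry loop
kill -SIGINT "$pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "target shutdown done"
        break
    fi
    sleep 0.5
done
```

The bounded loop matters: if the target ignores SIGINT, the script gives up after ~15 s instead of hanging forever (the real common.sh escalates or errors out in that case).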
00:09:22.867  INFO: relaunching applications...
00:09:22.867   04:59:36 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:09:22.867  Waiting for target to run...
00:09:22.867  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:22.867   04:59:36 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:22.867   04:59:36 json_config -- json_config/common.sh@9 -- # local app=target
00:09:22.867   04:59:36 json_config -- json_config/common.sh@10 -- # shift
00:09:22.867   04:59:36 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:22.867   04:59:36 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:22.867   04:59:36 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:22.867   04:59:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:22.867   04:59:36 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:22.867   04:59:36 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=124425
00:09:22.867   04:59:36 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:22.867   04:59:36 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:22.867   04:59:36 json_config -- json_config/common.sh@25 -- # waitforlisten 124425 /var/tmp/spdk_tgt.sock
00:09:22.867   04:59:36 json_config -- common/autotest_common.sh@835 -- # '[' -z 124425 ']'
00:09:22.867   04:59:36 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:22.867   04:59:36 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:22.867   04:59:36 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:22.867   04:59:36 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:22.867   04:59:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:22.867  [2024-11-20 04:59:36.610485] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:22.867  [2024-11-20 04:59:36.611089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124425 ]
00:09:23.434  [2024-11-20 04:59:37.158383] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:23.434  [2024-11-20 04:59:37.187345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:23.434  [2024-11-20 04:59:37.230125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:23.693  [2024-11-20 04:59:37.401357] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:09:23.693  [2024-11-20 04:59:37.401661] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:09:23.693  [2024-11-20 04:59:37.409325] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:23.693  [2024-11-20 04:59:37.409536] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:09:23.693  [2024-11-20 04:59:37.417370] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:23.693  [2024-11-20 04:59:37.417587] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:09:23.693  [2024-11-20 04:59:37.417729] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:09:23.693  [2024-11-20 04:59:37.501634] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:09:23.693  [2024-11-20 04:59:37.501879] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:23.693  [2024-11-20 04:59:37.501976] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980
00:09:23.693  [2024-11-20 04:59:37.502198] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:23.693  [2024-11-20 04:59:37.502739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:23.693  [2024-11-20 04:59:37.502918] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:09:23.693  
00:09:23.693  INFO: Checking if target configuration is the same...
00:09:23.693   04:59:37 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:23.693   04:59:37 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:23.693   04:59:37 json_config -- json_config/common.sh@26 -- # echo ''
00:09:23.693   04:59:37 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:09:23.693   04:59:37 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:09:23.693   04:59:37 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:23.693    04:59:37 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:09:23.693    04:59:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:23.693  + '[' 2 -ne 2 ']'
00:09:23.693  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:09:23.693  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:09:23.693  + rootdir=/home/vagrant/spdk_repo/spdk
00:09:23.693  +++ basename /dev/fd/62
00:09:23.693  ++ mktemp /tmp/62.XXX
00:09:23.693  + tmp_file_1=/tmp/62.s7y
00:09:23.693  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:23.693  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:23.693  + tmp_file_2=/tmp/spdk_tgt_config.json.R7T
00:09:23.693  + ret=0
00:09:23.693  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:24.261  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:24.261  + diff -u /tmp/62.s7y /tmp/spdk_tgt_config.json.R7T
00:09:24.261  + echo 'INFO: JSON config files are the same'
00:09:24.261  INFO: JSON config files are the same
00:09:24.261  + rm /tmp/62.s7y /tmp/spdk_tgt_config.json.R7T
00:09:24.261  + exit 0
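The json_diff.sh flow above normalizes both configs into temp files and compares them with `diff -u`. A standalone sketch of the same idea, using `python3 -m json.tool --sort-keys` as a stand-in for the SPDK-specific `config_filter.py -method sort`; the two inline JSON snippets are assumed sample configs, not real SPDK output:

```shell
#!/usr/bin/env bash
# Create temp files for the normalized configs (mirrors tmp_file_1/tmp_file_2)
tmp1=$(mktemp /tmp/cfg1.XXX)
tmp2=$(mktemp /tmp/cfg2.XXX)

# Normalize: sorting keys makes semantically equal configs byte-identical
python3 -m json.tool --sort-keys <<<'{"b": 1, "a": 2}' > "$tmp1"
python3 -m json.tool --sort-keys <<<'{"a": 2, "b": 1}' > "$tmp2"

# diff -u exits 0 when the files match, non-zero otherwise
if diff -u "$tmp1" "$tmp2"; then
    echo "INFO: JSON config files are the same"
else
    echo "INFO: configuration change detected."
fi
rm "$tmp1" "$tmp2"
```

Without the normalization step, an RPC server that emits objects in a different order would produce spurious diffs even when the configuration is unchanged.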
00:09:24.261   04:59:37 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:09:24.261   04:59:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:24.261  INFO: changing configuration and checking if this can be detected...
00:09:24.261   04:59:37 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:24.261   04:59:37 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:24.519   04:59:38 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:24.519    04:59:38 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:09:24.519    04:59:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:24.519  + '[' 2 -ne 2 ']'
00:09:24.519  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:09:24.519  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:09:24.519  + rootdir=/home/vagrant/spdk_repo/spdk
00:09:24.519  +++ basename /dev/fd/62
00:09:24.519  ++ mktemp /tmp/62.XXX
00:09:24.519  + tmp_file_1=/tmp/62.h0t
00:09:24.519  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:24.519  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:24.519  + tmp_file_2=/tmp/spdk_tgt_config.json.8cO
00:09:24.519  + ret=0
00:09:24.519  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:24.778  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:09:24.778  + diff -u /tmp/62.h0t /tmp/spdk_tgt_config.json.8cO
00:09:24.778  + ret=1
00:09:24.778  + echo '=== Start of file: /tmp/62.h0t ==='
00:09:24.778  + cat /tmp/62.h0t
00:09:24.778  + echo '=== End of file: /tmp/62.h0t ==='
00:09:24.778  + echo ''
00:09:24.778  + echo '=== Start of file: /tmp/spdk_tgt_config.json.8cO ==='
00:09:24.778  + cat /tmp/spdk_tgt_config.json.8cO
00:09:24.778  + echo '=== End of file: /tmp/spdk_tgt_config.json.8cO ==='
00:09:24.778  + echo ''
00:09:24.778  + rm /tmp/62.h0t /tmp/spdk_tgt_config.json.8cO
00:09:24.778  + exit 1
00:09:24.778  INFO: configuration change detected.
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:09:24.778   04:59:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:24.778   04:59:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@324 -- # [[ -n 124425 ]]
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:09:24.778   04:59:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:24.778   04:59:38 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@193 -- # [[ 1 -eq 1 ]]
00:09:24.778   04:59:38 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:09:24.778   04:59:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:09:25.036   04:59:38 json_config -- json_config/json_config.sh@195 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:09:25.036   04:59:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:09:25.293   04:59:39 json_config -- json_config/json_config.sh@196 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:09:25.293   04:59:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:09:25.551   04:59:39 json_config -- json_config/json_config.sh@197 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:09:25.551   04:59:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:09:25.825    04:59:39 json_config -- json_config/json_config.sh@200 -- # uname -s
00:09:25.825   04:59:39 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:09:25.825   04:59:39 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:09:25.825   04:59:39 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:09:25.825   04:59:39 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:09:25.825   04:59:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:25.825   04:59:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:26.108   04:59:39 json_config -- json_config/json_config.sh@330 -- # killprocess 124425
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@954 -- # '[' -z 124425 ']'
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@958 -- # kill -0 124425
00:09:26.108    04:59:39 json_config -- common/autotest_common.sh@959 -- # uname
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:26.108    04:59:39 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124425
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124425'
00:09:26.108  killing process with pid 124425
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@973 -- # kill 124425
00:09:26.108   04:59:39 json_config -- common/autotest_common.sh@978 -- # wait 124425
00:09:26.367   04:59:40 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:09:26.367   04:59:40 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:09:26.367   04:59:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:26.367   04:59:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:26.367   04:59:40 json_config -- json_config/json_config.sh@335 -- # return 0
00:09:26.367   04:59:40 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:09:26.367  INFO: Success
00:09:26.367  ************************************
00:09:26.367  END TEST json_config
00:09:26.367  ************************************
00:09:26.367  
00:09:26.367  real	0m11.718s
00:09:26.367  user	0m17.914s
00:09:26.367  sys	0m2.383s
00:09:26.367   04:59:40 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:26.367   04:59:40 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:26.367   04:59:40  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:26.367   04:59:40  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:26.367   04:59:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:26.367   04:59:40  -- common/autotest_common.sh@10 -- # set +x
00:09:26.367  ************************************
00:09:26.367  START TEST json_config_extra_key
00:09:26.367  ************************************
00:09:26.367   04:59:40 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:09:26.367    04:59:40 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:26.367     04:59:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version
00:09:26.367     04:59:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:26.626    04:59:40 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:26.626     04:59:40 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:26.626    04:59:40 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:26.626    04:59:40 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:26.626    04:59:40 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:26.626  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.626  		--rc genhtml_branch_coverage=1
00:09:26.626  		--rc genhtml_function_coverage=1
00:09:26.626  		--rc genhtml_legend=1
00:09:26.626  		--rc geninfo_all_blocks=1
00:09:26.626  		--rc geninfo_unexecuted_blocks=1
00:09:26.626  		
00:09:26.626  		'
00:09:26.626    04:59:40 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:26.626  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.626  		--rc genhtml_branch_coverage=1
00:09:26.626  		--rc genhtml_function_coverage=1
00:09:26.626  		--rc genhtml_legend=1
00:09:26.626  		--rc geninfo_all_blocks=1
00:09:26.626  		--rc geninfo_unexecuted_blocks=1
00:09:26.626  		
00:09:26.626  		'
00:09:26.626    04:59:40 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:26.626  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.626  		--rc genhtml_branch_coverage=1
00:09:26.626  		--rc genhtml_function_coverage=1
00:09:26.626  		--rc genhtml_legend=1
00:09:26.626  		--rc geninfo_all_blocks=1
00:09:26.626  		--rc geninfo_unexecuted_blocks=1
00:09:26.626  		
00:09:26.626  		'
00:09:26.626    04:59:40 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:26.626  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.626  		--rc genhtml_branch_coverage=1
00:09:26.626  		--rc genhtml_function_coverage=1
00:09:26.627  		--rc genhtml_legend=1
00:09:26.627  		--rc geninfo_all_blocks=1
00:09:26.627  		--rc geninfo_unexecuted_blocks=1
00:09:26.627  		
00:09:26.627  		'
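The cmp_versions trace above (scripts/common.sh@333-368) splits each version string on `IFS=.-:` into an array and compares component-wise. A simplified standalone sketch of that logic; the function name `ver_lt` is an assumption, not the real helper:

```shell
#!/usr/bin/env bash
# Return 0 if $1 < $2 under component-wise version ordering,
# splitting on '.', '-' and ':' and padding missing components with 0.
ver_lt() {
    local IFS=.-: i
    read -ra v1 <<<"$1"
    read -ra v2 <<<"$2"
    local len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for ((i = 0; i < len; i++)); do
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
        if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    done
    return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This is why the trace compares `lcov 1.15` against `2` correctly: a plain string comparison would get `1.15 < 1.9` wrong, while the component-wise split (`1` vs `1`, then `15` vs `9`) does not.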
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:09:26.627     04:59:40 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:26.627     04:59:40 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e3cebed2-b3b1-4fc1-bf5a-0d71cea7beba
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e3cebed2-b3b1-4fc1-bf5a-0d71cea7beba
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:26.627     04:59:40 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:26.627     04:59:40 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:26.627     04:59:40 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:26.627     04:59:40 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:26.627      04:59:40 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:26.627      04:59:40 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:26.627      04:59:40 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:26.627      04:59:40 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:09:26.627      04:59:40 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@51 -- # : 0
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:26.627  /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:26.627    04:59:40 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:26.627  INFO: launching applications...
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:26.627   04:59:40 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=124606
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:26.627  Waiting for target to run...
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 124606 /var/tmp/spdk_tgt.sock
00:09:26.627   04:59:40 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 124606 ']'
00:09:26.627   04:59:40 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:09:26.627   04:59:40 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:26.627   04:59:40 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:26.627  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:26.627   04:59:40 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:26.627   04:59:40 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:26.627   04:59:40 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:26.627  [2024-11-20 04:59:40.435323] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:26.627  [2024-11-20 04:59:40.435676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124606 ]
00:09:27.194  [2024-11-20 04:59:40.912709] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:27.195  [2024-11-20 04:59:40.939537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:27.195  [2024-11-20 04:59:40.970178] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:27.762   04:59:41 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:27.762   04:59:41 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:09:27.762  
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:27.762   04:59:41 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:27.762  INFO: shutting down applications...
00:09:27.762   04:59:41 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 124606 ]]
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 124606
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124606
00:09:27.762   04:59:41 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:28.022   04:59:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:28.022   04:59:41 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:28.022   04:59:41 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 124606
00:09:28.022   04:59:41 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:28.022   04:59:41 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:28.022   04:59:41 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:28.022   04:59:41 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:28.022  SPDK target shutdown done
00:09:28.022  Success
00:09:28.022   04:59:41 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:09:28.022  ************************************
00:09:28.022  END TEST json_config_extra_key
00:09:28.022  ************************************
00:09:28.022  
00:09:28.022  real	0m1.719s
00:09:28.022  user	0m1.511s
00:09:28.022  sys	0m0.590s
00:09:28.022   04:59:41 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:28.022   04:59:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:28.022   04:59:41  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:28.022   04:59:41  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:28.022   04:59:41  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:28.022   04:59:41  -- common/autotest_common.sh@10 -- # set +x
00:09:28.281  ************************************
00:09:28.281  START TEST alias_rpc
00:09:28.281  ************************************
00:09:28.281   04:59:41 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:28.281  * Looking for test storage...
00:09:28.281  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:09:28.281    04:59:42 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:28.281     04:59:42 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:09:28.281     04:59:42 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:28.281    04:59:42 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:28.281     04:59:42 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:28.281    04:59:42 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:28.281    04:59:42 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:28.281    04:59:42 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:28.281  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.281  		--rc genhtml_branch_coverage=1
00:09:28.281  		--rc genhtml_function_coverage=1
00:09:28.281  		--rc genhtml_legend=1
00:09:28.281  		--rc geninfo_all_blocks=1
00:09:28.281  		--rc geninfo_unexecuted_blocks=1
00:09:28.281  		
00:09:28.281  		'
00:09:28.281    04:59:42 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:28.281  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.281  		--rc genhtml_branch_coverage=1
00:09:28.281  		--rc genhtml_function_coverage=1
00:09:28.281  		--rc genhtml_legend=1
00:09:28.281  		--rc geninfo_all_blocks=1
00:09:28.281  		--rc geninfo_unexecuted_blocks=1
00:09:28.281  		
00:09:28.281  		'
00:09:28.281    04:59:42 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:28.281  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.281  		--rc genhtml_branch_coverage=1
00:09:28.281  		--rc genhtml_function_coverage=1
00:09:28.281  		--rc genhtml_legend=1
00:09:28.281  		--rc geninfo_all_blocks=1
00:09:28.281  		--rc geninfo_unexecuted_blocks=1
00:09:28.281  		
00:09:28.281  		'
00:09:28.281    04:59:42 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:28.281  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:28.281  		--rc genhtml_branch_coverage=1
00:09:28.281  		--rc genhtml_function_coverage=1
00:09:28.281  		--rc genhtml_legend=1
00:09:28.281  		--rc geninfo_all_blocks=1
00:09:28.281  		--rc geninfo_unexecuted_blocks=1
00:09:28.281  		
00:09:28.281  		'
00:09:28.281   04:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:09:28.281   04:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=124680
00:09:28.281   04:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 124680
00:09:28.281   04:59:42 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:28.281   04:59:42 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 124680 ']'
00:09:28.281   04:59:42 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:28.281   04:59:42 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:28.281   04:59:42 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:28.281  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:28.281   04:59:42 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:28.281   04:59:42 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:28.281  [2024-11-20 04:59:42.227949] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:28.281  [2024-11-20 04:59:42.228267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124680 ]
00:09:28.540  [2024-11-20 04:59:42.378698] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:28.540  [2024-11-20 04:59:42.400757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:28.540  [2024-11-20 04:59:42.439254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:29.476   04:59:43 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:29.476   04:59:43 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:09:29.476   04:59:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:09:29.476   04:59:43 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 124680
00:09:29.476   04:59:43 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 124680 ']'
00:09:29.476   04:59:43 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 124680
00:09:29.476    04:59:43 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:09:29.476   04:59:43 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:29.476    04:59:43 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124680
00:09:29.734   04:59:43 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:29.734   04:59:43 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:29.734  killing process with pid 124680
00:09:29.734   04:59:43 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124680'
00:09:29.734   04:59:43 alias_rpc -- common/autotest_common.sh@973 -- # kill 124680
00:09:29.734   04:59:43 alias_rpc -- common/autotest_common.sh@978 -- # wait 124680
00:09:29.993  
00:09:29.993  real	0m1.866s
00:09:29.993  user	0m2.024s
00:09:29.993  sys	0m0.493s
00:09:29.993   04:59:43 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:29.993   04:59:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:29.993  ************************************
00:09:29.993  END TEST alias_rpc
00:09:29.993  ************************************
00:09:29.993   04:59:43  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:09:29.993   04:59:43  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:29.993   04:59:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:29.993   04:59:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:29.993   04:59:43  -- common/autotest_common.sh@10 -- # set +x
00:09:29.993  ************************************
00:09:29.993  START TEST spdkcli_tcp
00:09:29.993  ************************************
00:09:29.993   04:59:43 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:09:30.252  * Looking for test storage...
00:09:30.252  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:09:30.252    04:59:43 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:30.252     04:59:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:30.252     04:59:43 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version
00:09:30.252    04:59:44 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:30.252     04:59:44 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:30.252    04:59:44 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:09:30.252    04:59:44 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:30.252    04:59:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:30.252  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.252  		--rc genhtml_branch_coverage=1
00:09:30.252  		--rc genhtml_function_coverage=1
00:09:30.252  		--rc genhtml_legend=1
00:09:30.252  		--rc geninfo_all_blocks=1
00:09:30.252  		--rc geninfo_unexecuted_blocks=1
00:09:30.252  		
00:09:30.252  		'
00:09:30.252    04:59:44 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:30.252  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.252  		--rc genhtml_branch_coverage=1
00:09:30.252  		--rc genhtml_function_coverage=1
00:09:30.252  		--rc genhtml_legend=1
00:09:30.252  		--rc geninfo_all_blocks=1
00:09:30.252  		--rc geninfo_unexecuted_blocks=1
00:09:30.252  		
00:09:30.252  		'
00:09:30.252    04:59:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:30.252  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.252  		--rc genhtml_branch_coverage=1
00:09:30.252  		--rc genhtml_function_coverage=1
00:09:30.252  		--rc genhtml_legend=1
00:09:30.252  		--rc geninfo_all_blocks=1
00:09:30.252  		--rc geninfo_unexecuted_blocks=1
00:09:30.252  		
00:09:30.252  		'
00:09:30.252    04:59:44 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:30.252  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:30.252  		--rc genhtml_branch_coverage=1
00:09:30.252  		--rc genhtml_function_coverage=1
00:09:30.252  		--rc genhtml_legend=1
00:09:30.252  		--rc geninfo_all_blocks=1
00:09:30.252  		--rc geninfo_unexecuted_blocks=1
00:09:30.252  		
00:09:30.252  		'
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:09:30.252    04:59:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:09:30.252    04:59:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=124777
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 124777
00:09:30.252   04:59:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 124777 ']'
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:30.252  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:30.252   04:59:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:30.252  [2024-11-20 04:59:44.167097] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:30.252  [2024-11-20 04:59:44.167406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124777 ]
00:09:30.511  [2024-11-20 04:59:44.327799] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:30.511  [2024-11-20 04:59:44.347509] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:30.511  [2024-11-20 04:59:44.392246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:30.511  [2024-11-20 04:59:44.392246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:31.445   04:59:45 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:31.445   04:59:45 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:09:31.445   04:59:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=124799
00:09:31.445   04:59:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:09:31.445   04:59:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:09:31.445  [
00:09:31.445    "spdk_get_version",
00:09:31.445    "rpc_get_methods",
00:09:31.445    "notify_get_notifications",
00:09:31.445    "notify_get_types",
00:09:31.445    "trace_get_info",
00:09:31.445    "trace_get_tpoint_group_mask",
00:09:31.445    "trace_disable_tpoint_group",
00:09:31.445    "trace_enable_tpoint_group",
00:09:31.445    "trace_clear_tpoint_mask",
00:09:31.445    "trace_set_tpoint_mask",
00:09:31.445    "fsdev_set_opts",
00:09:31.445    "fsdev_get_opts",
00:09:31.445    "framework_get_pci_devices",
00:09:31.445    "framework_get_config",
00:09:31.445    "framework_get_subsystems",
00:09:31.445    "keyring_get_keys",
00:09:31.445    "iobuf_get_stats",
00:09:31.445    "iobuf_set_options",
00:09:31.445    "sock_get_default_impl",
00:09:31.445    "sock_set_default_impl",
00:09:31.445    "sock_impl_set_options",
00:09:31.445    "sock_impl_get_options",
00:09:31.445    "vmd_rescan",
00:09:31.445    "vmd_remove_device",
00:09:31.445    "vmd_enable",
00:09:31.445    "accel_get_stats",
00:09:31.445    "accel_set_options",
00:09:31.445    "accel_set_driver",
00:09:31.445    "accel_crypto_key_destroy",
00:09:31.445    "accel_crypto_keys_get",
00:09:31.445    "accel_crypto_key_create",
00:09:31.445    "accel_assign_opc",
00:09:31.445    "accel_get_module_info",
00:09:31.445    "accel_get_opc_assignments",
00:09:31.445    "bdev_get_histogram",
00:09:31.445    "bdev_enable_histogram",
00:09:31.445    "bdev_set_qos_limit",
00:09:31.445    "bdev_set_qd_sampling_period",
00:09:31.445    "bdev_get_bdevs",
00:09:31.445    "bdev_reset_iostat",
00:09:31.445    "bdev_get_iostat",
00:09:31.445    "bdev_examine",
00:09:31.445    "bdev_wait_for_examine",
00:09:31.445    "bdev_set_options",
00:09:31.445    "scsi_get_devices",
00:09:31.445    "thread_set_cpumask",
00:09:31.445    "scheduler_set_options",
00:09:31.445    "framework_get_governor",
00:09:31.445    "framework_get_scheduler",
00:09:31.445    "framework_set_scheduler",
00:09:31.445    "framework_get_reactors",
00:09:31.445    "thread_get_io_channels",
00:09:31.445    "thread_get_pollers",
00:09:31.445    "thread_get_stats",
00:09:31.445    "framework_monitor_context_switch",
00:09:31.445    "spdk_kill_instance",
00:09:31.445    "log_enable_timestamps",
00:09:31.445    "log_get_flags",
00:09:31.445    "log_clear_flag",
00:09:31.445    "log_set_flag",
00:09:31.445    "log_get_level",
00:09:31.445    "log_set_level",
00:09:31.445    "log_get_print_level",
00:09:31.445    "log_set_print_level",
00:09:31.445    "framework_enable_cpumask_locks",
00:09:31.445    "framework_disable_cpumask_locks",
00:09:31.445    "framework_wait_init",
00:09:31.445    "framework_start_init",
00:09:31.445    "virtio_blk_create_transport",
00:09:31.445    "virtio_blk_get_transports",
00:09:31.445    "vhost_controller_set_coalescing",
00:09:31.445    "vhost_get_controllers",
00:09:31.445    "vhost_delete_controller",
00:09:31.445    "vhost_create_blk_controller",
00:09:31.445    "vhost_scsi_controller_remove_target",
00:09:31.445    "vhost_scsi_controller_add_target",
00:09:31.445    "vhost_start_scsi_controller",
00:09:31.445    "vhost_create_scsi_controller",
00:09:31.445    "nbd_get_disks",
00:09:31.445    "nbd_stop_disk",
00:09:31.445    "nbd_start_disk",
00:09:31.445    "env_dpdk_get_mem_stats",
00:09:31.445    "nvmf_stop_mdns_prr",
00:09:31.445    "nvmf_publish_mdns_prr",
00:09:31.445    "nvmf_subsystem_get_listeners",
00:09:31.445    "nvmf_subsystem_get_qpairs",
00:09:31.446    "nvmf_subsystem_get_controllers",
00:09:31.446    "nvmf_get_stats",
00:09:31.446    "nvmf_get_transports",
00:09:31.446    "nvmf_create_transport",
00:09:31.446    "nvmf_get_targets",
00:09:31.446    "nvmf_delete_target",
00:09:31.446    "nvmf_create_target",
00:09:31.446    "nvmf_subsystem_allow_any_host",
00:09:31.446    "nvmf_subsystem_set_keys",
00:09:31.446    "nvmf_subsystem_remove_host",
00:09:31.446    "nvmf_subsystem_add_host",
00:09:31.446    "nvmf_ns_remove_host",
00:09:31.446    "nvmf_ns_add_host",
00:09:31.446    "nvmf_subsystem_remove_ns",
00:09:31.446    "nvmf_subsystem_set_ns_ana_group",
00:09:31.446    "nvmf_subsystem_add_ns",
00:09:31.446    "nvmf_subsystem_listener_set_ana_state",
00:09:31.446    "nvmf_discovery_get_referrals",
00:09:31.446    "nvmf_discovery_remove_referral",
00:09:31.446    "nvmf_discovery_add_referral",
00:09:31.446    "nvmf_subsystem_remove_listener",
00:09:31.446    "nvmf_subsystem_add_listener",
00:09:31.446    "nvmf_delete_subsystem",
00:09:31.446    "nvmf_create_subsystem",
00:09:31.446    "nvmf_get_subsystems",
00:09:31.446    "nvmf_set_crdt",
00:09:31.446    "nvmf_set_config",
00:09:31.446    "nvmf_set_max_subsystems",
00:09:31.446    "iscsi_get_histogram",
00:09:31.446    "iscsi_enable_histogram",
00:09:31.446    "iscsi_set_options",
00:09:31.446    "iscsi_get_auth_groups",
00:09:31.446    "iscsi_auth_group_remove_secret",
00:09:31.446    "iscsi_auth_group_add_secret",
00:09:31.446    "iscsi_delete_auth_group",
00:09:31.446    "iscsi_create_auth_group",
00:09:31.446    "iscsi_set_discovery_auth",
00:09:31.446    "iscsi_get_options",
00:09:31.446    "iscsi_target_node_request_logout",
00:09:31.446    "iscsi_target_node_set_redirect",
00:09:31.446    "iscsi_target_node_set_auth",
00:09:31.446    "iscsi_target_node_add_lun",
00:09:31.446    "iscsi_get_stats",
00:09:31.446    "iscsi_get_connections",
00:09:31.446    "iscsi_portal_group_set_auth",
00:09:31.446    "iscsi_start_portal_group",
00:09:31.446    "iscsi_delete_portal_group",
00:09:31.446    "iscsi_create_portal_group",
00:09:31.446    "iscsi_get_portal_groups",
00:09:31.446    "iscsi_delete_target_node",
00:09:31.446    "iscsi_target_node_remove_pg_ig_maps",
00:09:31.446    "iscsi_target_node_add_pg_ig_maps",
00:09:31.446    "iscsi_create_target_node",
00:09:31.446    "iscsi_get_target_nodes",
00:09:31.446    "iscsi_delete_initiator_group",
00:09:31.446    "iscsi_initiator_group_remove_initiators",
00:09:31.446    "iscsi_initiator_group_add_initiators",
00:09:31.446    "iscsi_create_initiator_group",
00:09:31.446    "iscsi_get_initiator_groups",
00:09:31.446    "fsdev_aio_delete",
00:09:31.446    "fsdev_aio_create",
00:09:31.446    "keyring_linux_set_options",
00:09:31.446    "keyring_file_remove_key",
00:09:31.446    "keyring_file_add_key",
00:09:31.446    "iaa_scan_accel_module",
00:09:31.446    "dsa_scan_accel_module",
00:09:31.446    "ioat_scan_accel_module",
00:09:31.446    "accel_error_inject_error",
00:09:31.446    "bdev_iscsi_delete",
00:09:31.446    "bdev_iscsi_create",
00:09:31.446    "bdev_iscsi_set_options",
00:09:31.446    "bdev_virtio_attach_controller",
00:09:31.446    "bdev_virtio_scsi_get_devices",
00:09:31.446    "bdev_virtio_detach_controller",
00:09:31.446    "bdev_virtio_blk_set_hotplug",
00:09:31.446    "bdev_ftl_set_property",
00:09:31.446    "bdev_ftl_get_properties",
00:09:31.446    "bdev_ftl_get_stats",
00:09:31.446    "bdev_ftl_unmap",
00:09:31.446    "bdev_ftl_unload",
00:09:31.446    "bdev_ftl_delete",
00:09:31.446    "bdev_ftl_load",
00:09:31.446    "bdev_ftl_create",
00:09:31.446    "bdev_aio_delete",
00:09:31.446    "bdev_aio_rescan",
00:09:31.446    "bdev_aio_create",
00:09:31.446    "blobfs_create",
00:09:31.446    "blobfs_detect",
00:09:31.446    "blobfs_set_cache_size",
00:09:31.446    "bdev_zone_block_delete",
00:09:31.446    "bdev_zone_block_create",
00:09:31.446    "bdev_delay_delete",
00:09:31.446    "bdev_delay_create",
00:09:31.446    "bdev_delay_update_latency",
00:09:31.446    "bdev_split_delete",
00:09:31.446    "bdev_split_create",
00:09:31.446    "bdev_error_inject_error",
00:09:31.446    "bdev_error_delete",
00:09:31.446    "bdev_error_create",
00:09:31.446    "bdev_raid_set_options",
00:09:31.446    "bdev_raid_remove_base_bdev",
00:09:31.446    "bdev_raid_add_base_bdev",
00:09:31.446    "bdev_raid_delete",
00:09:31.446    "bdev_raid_create",
00:09:31.446    "bdev_raid_get_bdevs",
00:09:31.446    "bdev_lvol_set_parent_bdev",
00:09:31.446    "bdev_lvol_set_parent",
00:09:31.446    "bdev_lvol_check_shallow_copy",
00:09:31.446    "bdev_lvol_start_shallow_copy",
00:09:31.446    "bdev_lvol_grow_lvstore",
00:09:31.446    "bdev_lvol_get_lvols",
00:09:31.446    "bdev_lvol_get_lvstores",
00:09:31.446    "bdev_lvol_delete",
00:09:31.446    "bdev_lvol_set_read_only",
00:09:31.446    "bdev_lvol_resize",
00:09:31.446    "bdev_lvol_decouple_parent",
00:09:31.446    "bdev_lvol_inflate",
00:09:31.446    "bdev_lvol_rename",
00:09:31.446    "bdev_lvol_clone_bdev",
00:09:31.446    "bdev_lvol_clone",
00:09:31.446    "bdev_lvol_snapshot",
00:09:31.446    "bdev_lvol_create",
00:09:31.446    "bdev_lvol_delete_lvstore",
00:09:31.446    "bdev_lvol_rename_lvstore",
00:09:31.446    "bdev_lvol_create_lvstore",
00:09:31.446    "bdev_passthru_delete",
00:09:31.446    "bdev_passthru_create",
00:09:31.446    "bdev_nvme_cuse_unregister",
00:09:31.446    "bdev_nvme_cuse_register",
00:09:31.446    "bdev_opal_new_user",
00:09:31.446    "bdev_opal_set_lock_state",
00:09:31.446    "bdev_opal_delete",
00:09:31.446    "bdev_opal_get_info",
00:09:31.446    "bdev_opal_create",
00:09:31.446    "bdev_nvme_opal_revert",
00:09:31.446    "bdev_nvme_opal_init",
00:09:31.446    "bdev_nvme_send_cmd",
00:09:31.446    "bdev_nvme_set_keys",
00:09:31.446    "bdev_nvme_get_path_iostat",
00:09:31.446    "bdev_nvme_get_mdns_discovery_info",
00:09:31.446    "bdev_nvme_stop_mdns_discovery",
00:09:31.446    "bdev_nvme_start_mdns_discovery",
00:09:31.446    "bdev_nvme_set_multipath_policy",
00:09:31.446    "bdev_nvme_set_preferred_path",
00:09:31.446    "bdev_nvme_get_io_paths",
00:09:31.446    "bdev_nvme_remove_error_injection",
00:09:31.446    "bdev_nvme_add_error_injection",
00:09:31.446    "bdev_nvme_get_discovery_info",
00:09:31.446    "bdev_nvme_stop_discovery",
00:09:31.446    "bdev_nvme_start_discovery",
00:09:31.446    "bdev_nvme_get_controller_health_info",
00:09:31.446    "bdev_nvme_disable_controller",
00:09:31.446    "bdev_nvme_enable_controller",
00:09:31.446    "bdev_nvme_reset_controller",
00:09:31.446    "bdev_nvme_get_transport_statistics",
00:09:31.446    "bdev_nvme_apply_firmware",
00:09:31.446    "bdev_nvme_detach_controller",
00:09:31.446    "bdev_nvme_get_controllers",
00:09:31.446    "bdev_nvme_attach_controller",
00:09:31.446    "bdev_nvme_set_hotplug",
00:09:31.446    "bdev_nvme_set_options",
00:09:31.446    "bdev_null_resize",
00:09:31.446    "bdev_null_delete",
00:09:31.446    "bdev_null_create",
00:09:31.446    "bdev_malloc_delete",
00:09:31.446    "bdev_malloc_create"
00:09:31.446  ]
00:09:31.446   04:59:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:09:31.446   04:59:45 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:31.446   04:59:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:31.446   04:59:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:09:31.446   04:59:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 124777
00:09:31.446   04:59:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 124777 ']'
00:09:31.446   04:59:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 124777
00:09:31.446    04:59:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:09:31.446   04:59:45 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:31.446    04:59:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124777
00:09:31.705   04:59:45 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:31.705   04:59:45 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:31.705   04:59:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124777'
00:09:31.705  killing process with pid 124777
00:09:31.705   04:59:45 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 124777
00:09:31.705   04:59:45 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 124777
00:09:31.963  
00:09:31.963  real	0m1.929s
00:09:31.963  user	0m3.476s
00:09:31.963  sys	0m0.506s
00:09:31.963   04:59:45 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:31.963  ************************************
00:09:31.963  END TEST spdkcli_tcp
00:09:31.963   04:59:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:09:31.963  ************************************
00:09:31.963   04:59:45  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:31.963   04:59:45  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:31.963   04:59:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:31.963   04:59:45  -- common/autotest_common.sh@10 -- # set +x
00:09:31.963  ************************************
00:09:31.963  START TEST dpdk_mem_utility
00:09:31.963  ************************************
00:09:31.963   04:59:45 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:09:32.222  * Looking for test storage...
00:09:32.222  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:09:32.222    04:59:45 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:32.222     04:59:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:32.222     04:59:45 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version
00:09:32.222    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:32.222     04:59:46 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:32.222    04:59:46 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:09:32.222    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:32.222    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:32.222  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.222  		--rc genhtml_branch_coverage=1
00:09:32.222  		--rc genhtml_function_coverage=1
00:09:32.222  		--rc genhtml_legend=1
00:09:32.222  		--rc geninfo_all_blocks=1
00:09:32.222  		--rc geninfo_unexecuted_blocks=1
00:09:32.222  		
00:09:32.222  		'
00:09:32.222    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:32.222  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.222  		--rc genhtml_branch_coverage=1
00:09:32.222  		--rc genhtml_function_coverage=1
00:09:32.222  		--rc genhtml_legend=1
00:09:32.222  		--rc geninfo_all_blocks=1
00:09:32.222  		--rc geninfo_unexecuted_blocks=1
00:09:32.222  		
00:09:32.222  		'
00:09:32.222    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:32.222  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.222  		--rc genhtml_branch_coverage=1
00:09:32.222  		--rc genhtml_function_coverage=1
00:09:32.222  		--rc genhtml_legend=1
00:09:32.222  		--rc geninfo_all_blocks=1
00:09:32.222  		--rc geninfo_unexecuted_blocks=1
00:09:32.222  		
00:09:32.222  		'
00:09:32.222    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:32.222  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:32.222  		--rc genhtml_branch_coverage=1
00:09:32.222  		--rc genhtml_function_coverage=1
00:09:32.222  		--rc genhtml_legend=1
00:09:32.222  		--rc geninfo_all_blocks=1
00:09:32.222  		--rc geninfo_unexecuted_blocks=1
00:09:32.222  		
00:09:32.222  		'
00:09:32.222   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:32.222   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=124887
00:09:32.222   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 124887
00:09:32.222   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:09:32.222   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 124887 ']'
00:09:32.222   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:32.222   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:32.222   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:32.222  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:32.222   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:32.222   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:32.222  [2024-11-20 04:59:46.119916] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:32.223  [2024-11-20 04:59:46.120098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124887 ]
00:09:32.481  [2024-11-20 04:59:46.255296] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:32.481  [2024-11-20 04:59:46.282293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:32.481  [2024-11-20 04:59:46.312152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:32.740   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:32.740   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:09:32.740   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:09:32.740   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:09:32.740   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.740   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:32.740  {
00:09:32.740  "filename": "/tmp/spdk_mem_dump.txt"
00:09:32.740  }
00:09:32.740   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.740   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:09:32.740  DPDK memory size 810.000000 MiB in 1 heap(s)
00:09:32.740  1 heaps totaling size 810.000000 MiB
00:09:32.740    size:  810.000000 MiB heap id: 0
00:09:32.740  end heaps----------
00:09:32.740  9 mempools totaling size 595.772034 MiB
00:09:32.740    size:  212.674988 MiB name: PDU_immediate_data_Pool
00:09:32.740    size:  158.602051 MiB name: PDU_data_out_Pool
00:09:32.740    size:   92.545471 MiB name: bdev_io_124887
00:09:32.740    size:   50.003479 MiB name: msgpool_124887
00:09:32.740    size:   36.509338 MiB name: fsdev_io_124887
00:09:32.740    size:   21.763794 MiB name: PDU_Pool
00:09:32.740    size:   19.513306 MiB name: SCSI_TASK_Pool
00:09:32.740    size:    4.133484 MiB name: evtpool_124887
00:09:32.740    size:    0.026123 MiB name: Session_Pool
00:09:32.740  end mempools-------
00:09:32.740  6 memzones totaling size 4.142822 MiB
00:09:32.740    size:    1.000366 MiB name: RG_ring_0_124887
00:09:32.740    size:    1.000366 MiB name: RG_ring_1_124887
00:09:32.740    size:    1.000366 MiB name: RG_ring_4_124887
00:09:32.740    size:    1.000366 MiB name: RG_ring_5_124887
00:09:32.740    size:    0.125366 MiB name: RG_ring_2_124887
00:09:32.740    size:    0.015991 MiB name: RG_ring_3_124887
00:09:32.740  end memzones-------
00:09:32.740   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:09:33.000  heap id: 0 total size: 810.000000 MiB number of busy elements: 223 number of free elements: 15
00:09:33.000    list of free elements. size: 10.970276 MiB
00:09:33.000      element at address: 0x200018a00000 with size:    0.999878 MiB
00:09:33.000      element at address: 0x200018c00000 with size:    0.999878 MiB
00:09:33.000      element at address: 0x200000400000 with size:    0.996155 MiB
00:09:33.000      element at address: 0x200031800000 with size:    0.994446 MiB
00:09:33.000      element at address: 0x200008000000 with size:    0.959839 MiB
00:09:33.000      element at address: 0x200012c00000 with size:    0.954285 MiB
00:09:33.000      element at address: 0x200018e00000 with size:    0.936584 MiB
00:09:33.000      element at address: 0x200000200000 with size:    0.858093 MiB
00:09:33.000      element at address: 0x20001a600000 with size:    0.568970 MiB
00:09:33.000      element at address: 0x200000c00000 with size:    0.490845 MiB
00:09:33.000      element at address: 0x200003e00000 with size:    0.489624 MiB
00:09:33.000      element at address: 0x200019000000 with size:    0.485657 MiB
00:09:33.000      element at address: 0x200010600000 with size:    0.481018 MiB
00:09:33.000      element at address: 0x200027a00000 with size:    0.401794 MiB
00:09:33.000      element at address: 0x200000800000 with size:    0.353210 MiB
00:09:33.000    list of standard malloc elements. size: 199.110840 MiB
00:09:33.000      element at address: 0x2000081fff80 with size:  132.000122 MiB
00:09:33.000      element at address: 0x200003ffff80 with size:   64.000122 MiB
00:09:33.000      element at address: 0x200018afff80 with size:    1.000122 MiB
00:09:33.000      element at address: 0x200018cfff80 with size:    1.000122 MiB
00:09:33.000      element at address: 0x200018efff80 with size:    1.000122 MiB
00:09:33.000      element at address: 0x200018eeff00 with size:    0.062622 MiB
00:09:33.000      element at address: 0x2000003fdf80 with size:    0.007935 MiB
00:09:33.000      element at address: 0x200018eefdc0 with size:    0.000305 MiB
00:09:33.000      element at address: 0x2000002fbcc0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000003fdec0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff040 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff100 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff1c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff280 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff340 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff400 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff4c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff580 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff640 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff700 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff7c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff880 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ff940 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ffa00 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ffac0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ffcc0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ffd80 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000004ffe40 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085a6c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085a8c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085a980 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085aa40 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085ab00 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085abc0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085ac80 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085ad40 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085ae00 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085aec0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085af80 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085b040 with size:    0.000183 MiB
00:09:33.000      element at address: 0x20000085b100 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000008db3c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000008db5c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000008df880 with size:    0.000183 MiB
00:09:33.000      element at address: 0x2000008ffb40 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7da80 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7db40 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7dc00 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7dcc0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7dd80 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7de40 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7df00 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7dfc0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7e080 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7e140 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7e200 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7e2c0 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7e380 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7e440 with size:    0.000183 MiB
00:09:33.000      element at address: 0x200000c7e500 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7e5c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7e680 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7e740 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7e800 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7e8c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7e980 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7ea40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7eb00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7ebc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7ec80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000c7ed40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000cff000 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200000cff0c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7d580 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7d640 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7d700 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7d7c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7d880 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7d940 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7da00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003e7dac0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200003efdd80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x2000080fdd80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001067b240 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001067b300 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001067b3c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001067b480 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001067b540 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001067b600 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001067b6c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x2000106fb980 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200012cf44c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200018eefc40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200018eefd00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x2000190bc740 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691a80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691b40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691c00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691cc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691d80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691e40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691f00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a691fc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692080 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692140 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692200 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6922c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692380 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692440 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692500 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6925c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692680 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692740 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692800 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6928c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692980 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692a40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692b00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692bc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692c80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692d40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692e00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692ec0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a692f80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693040 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693100 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6931c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693280 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693340 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693400 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6934c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693580 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693640 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693700 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6937c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693880 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693940 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693a00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693ac0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693b80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693c40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693d00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693dc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693e80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a693f40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694000 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6940c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694180 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694240 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694300 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6943c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694480 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694540 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694600 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6946c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694780 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694840 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694900 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6949c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694a80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694b40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694c00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694cc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694d80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694e40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694f00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a694fc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a695080 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a695140 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a695200 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a6952c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a695380 with size:    0.000183 MiB
00:09:33.001      element at address: 0x20001a695440 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a66dc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a66e80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6da80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6dc80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6dd40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6de00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6dec0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6df80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e040 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e100 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e1c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e280 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e340 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e400 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e4c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e580 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e640 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e700 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e7c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e880 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6e940 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6ea00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6eac0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6eb80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6ec40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6ed00 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6edc0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6ee80 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6ef40 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f000 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f0c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f180 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f240 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f300 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f3c0 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f480 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f540 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f600 with size:    0.000183 MiB
00:09:33.001      element at address: 0x200027a6f6c0 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6f780 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6f840 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6f900 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6f9c0 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6fa80 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6fb40 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6fc00 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6fcc0 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6fd80 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6fe40 with size:    0.000183 MiB
00:09:33.002      element at address: 0x200027a6ff00 with size:    0.000183 MiB
00:09:33.002    list of memzone associated elements. size: 599.918884 MiB
00:09:33.002      element at address: 0x20001a695500 with size:  211.416748 MiB
00:09:33.002        associated memzone info: size:  211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:09:33.002      element at address: 0x200027a6ffc0 with size:  157.562561 MiB
00:09:33.002        associated memzone info: size:  157.562439 MiB name: MP_PDU_data_out_Pool_0
00:09:33.002      element at address: 0x200012df4780 with size:   92.045044 MiB
00:09:33.002        associated memzone info: size:   92.044922 MiB name: MP_bdev_io_124887_0
00:09:33.002      element at address: 0x200000dff380 with size:   48.003052 MiB
00:09:33.002        associated memzone info: size:   48.002930 MiB name: MP_msgpool_124887_0
00:09:33.002      element at address: 0x2000107fdb80 with size:   36.008911 MiB
00:09:33.002        associated memzone info: size:   36.008789 MiB name: MP_fsdev_io_124887_0
00:09:33.002      element at address: 0x2000191be940 with size:   20.255554 MiB
00:09:33.002        associated memzone info: size:   20.255432 MiB name: MP_PDU_Pool_0
00:09:33.002      element at address: 0x2000319feb40 with size:   18.005066 MiB
00:09:33.002        associated memzone info: size:   18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:09:33.002      element at address: 0x2000004fff00 with size:    3.000244 MiB
00:09:33.002        associated memzone info: size:    3.000122 MiB name: MP_evtpool_124887_0
00:09:33.002      element at address: 0x2000009ffe00 with size:    2.000488 MiB
00:09:33.002        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_124887
00:09:33.002      element at address: 0x2000002fbd80 with size:    1.008118 MiB
00:09:33.002        associated memzone info: size:    1.007996 MiB name: MP_evtpool_124887
00:09:33.002      element at address: 0x2000106fba40 with size:    1.008118 MiB
00:09:33.002        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:09:33.002      element at address: 0x2000190bc800 with size:    1.008118 MiB
00:09:33.002        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:09:33.002      element at address: 0x2000080fde40 with size:    1.008118 MiB
00:09:33.002        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:09:33.002      element at address: 0x200003efde40 with size:    1.008118 MiB
00:09:33.002        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:09:33.002      element at address: 0x200000cff180 with size:    1.000488 MiB
00:09:33.002        associated memzone info: size:    1.000366 MiB name: RG_ring_0_124887
00:09:33.002      element at address: 0x2000008ffc00 with size:    1.000488 MiB
00:09:33.002        associated memzone info: size:    1.000366 MiB name: RG_ring_1_124887
00:09:33.002      element at address: 0x200012cf4580 with size:    1.000488 MiB
00:09:33.002        associated memzone info: size:    1.000366 MiB name: RG_ring_4_124887
00:09:33.002      element at address: 0x2000318fe940 with size:    1.000488 MiB
00:09:33.002        associated memzone info: size:    1.000366 MiB name: RG_ring_5_124887
00:09:33.002      element at address: 0x20000085b1c0 with size:    0.500488 MiB
00:09:33.002        associated memzone info: size:    0.500366 MiB name: RG_MP_fsdev_io_124887
00:09:33.002      element at address: 0x200000c7ee00 with size:    0.500488 MiB
00:09:33.002        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_124887
00:09:33.002      element at address: 0x20001067b780 with size:    0.500488 MiB
00:09:33.002        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:09:33.002      element at address: 0x200003e7db80 with size:    0.500488 MiB
00:09:33.002        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:09:33.002      element at address: 0x20001907c540 with size:    0.250488 MiB
00:09:33.002        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:09:33.002      element at address: 0x2000002dbac0 with size:    0.125488 MiB
00:09:33.002        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_124887
00:09:33.002      element at address: 0x2000008df940 with size:    0.125488 MiB
00:09:33.002        associated memzone info: size:    0.125366 MiB name: RG_ring_2_124887
00:09:33.002      element at address: 0x2000080f5b80 with size:    0.031738 MiB
00:09:33.002        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:09:33.002      element at address: 0x200027a66f40 with size:    0.023743 MiB
00:09:33.002        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:09:33.002      element at address: 0x2000008db680 with size:    0.016113 MiB
00:09:33.002        associated memzone info: size:    0.015991 MiB name: RG_ring_3_124887
00:09:33.002      element at address: 0x200027a6d080 with size:    0.002441 MiB
00:09:33.002        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:09:33.002      element at address: 0x2000004ffb80 with size:    0.000305 MiB
00:09:33.002        associated memzone info: size:    0.000183 MiB name: MP_msgpool_124887
00:09:33.002      element at address: 0x2000008db480 with size:    0.000305 MiB
00:09:33.002        associated memzone info: size:    0.000183 MiB name: MP_fsdev_io_124887
00:09:33.002      element at address: 0x20000085a780 with size:    0.000305 MiB
00:09:33.002        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_124887
00:09:33.002      element at address: 0x200027a6db40 with size:    0.000305 MiB
00:09:33.002        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:09:33.002   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:09:33.002   04:59:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 124887
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 124887 ']'
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 124887
00:09:33.002    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:33.002    04:59:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 124887
00:09:33.002  killing process with pid 124887
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 124887'
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 124887
00:09:33.002   04:59:46 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 124887
00:09:33.262  
00:09:33.262  real	0m1.241s
00:09:33.262  user	0m1.136s
00:09:33.262  sys	0m0.448s
00:09:33.262  ************************************
00:09:33.262  END TEST dpdk_mem_utility
00:09:33.262  ************************************
00:09:33.262   04:59:47 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:33.262   04:59:47 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:09:33.262   04:59:47  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:33.262   04:59:47  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:33.262   04:59:47  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.262   04:59:47  -- common/autotest_common.sh@10 -- # set +x
00:09:33.262  ************************************
00:09:33.262  START TEST event
00:09:33.262  ************************************
00:09:33.262   04:59:47 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:09:33.521  * Looking for test storage...
00:09:33.521  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:09:33.521    04:59:47 event -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:33.521     04:59:47 event -- common/autotest_common.sh@1693 -- # lcov --version
00:09:33.521     04:59:47 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:33.521    04:59:47 event -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:33.521    04:59:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:33.521    04:59:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:33.521    04:59:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:33.521    04:59:47 event -- scripts/common.sh@336 -- # IFS=.-:
00:09:33.521    04:59:47 event -- scripts/common.sh@336 -- # read -ra ver1
00:09:33.521    04:59:47 event -- scripts/common.sh@337 -- # IFS=.-:
00:09:33.521    04:59:47 event -- scripts/common.sh@337 -- # read -ra ver2
00:09:33.521    04:59:47 event -- scripts/common.sh@338 -- # local 'op=<'
00:09:33.521    04:59:47 event -- scripts/common.sh@340 -- # ver1_l=2
00:09:33.521    04:59:47 event -- scripts/common.sh@341 -- # ver2_l=1
00:09:33.521    04:59:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:33.521    04:59:47 event -- scripts/common.sh@344 -- # case "$op" in
00:09:33.521    04:59:47 event -- scripts/common.sh@345 -- # : 1
00:09:33.521    04:59:47 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:33.521    04:59:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:33.521     04:59:47 event -- scripts/common.sh@365 -- # decimal 1
00:09:33.521     04:59:47 event -- scripts/common.sh@353 -- # local d=1
00:09:33.521     04:59:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:33.521     04:59:47 event -- scripts/common.sh@355 -- # echo 1
00:09:33.521    04:59:47 event -- scripts/common.sh@365 -- # ver1[v]=1
00:09:33.521     04:59:47 event -- scripts/common.sh@366 -- # decimal 2
00:09:33.521     04:59:47 event -- scripts/common.sh@353 -- # local d=2
00:09:33.521     04:59:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:33.521     04:59:47 event -- scripts/common.sh@355 -- # echo 2
00:09:33.521    04:59:47 event -- scripts/common.sh@366 -- # ver2[v]=2
00:09:33.521    04:59:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:33.521    04:59:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:33.521    04:59:47 event -- scripts/common.sh@368 -- # return 0
00:09:33.521    04:59:47 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:33.521    04:59:47 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:33.521  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.521  		--rc genhtml_branch_coverage=1
00:09:33.521  		--rc genhtml_function_coverage=1
00:09:33.521  		--rc genhtml_legend=1
00:09:33.521  		--rc geninfo_all_blocks=1
00:09:33.521  		--rc geninfo_unexecuted_blocks=1
00:09:33.521  		
00:09:33.521  		'
00:09:33.521    04:59:47 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:33.521  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.521  		--rc genhtml_branch_coverage=1
00:09:33.521  		--rc genhtml_function_coverage=1
00:09:33.521  		--rc genhtml_legend=1
00:09:33.521  		--rc geninfo_all_blocks=1
00:09:33.521  		--rc geninfo_unexecuted_blocks=1
00:09:33.521  		
00:09:33.521  		'
00:09:33.521    04:59:47 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:33.521  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.521  		--rc genhtml_branch_coverage=1
00:09:33.521  		--rc genhtml_function_coverage=1
00:09:33.521  		--rc genhtml_legend=1
00:09:33.521  		--rc geninfo_all_blocks=1
00:09:33.521  		--rc geninfo_unexecuted_blocks=1
00:09:33.521  		
00:09:33.521  		'
00:09:33.521    04:59:47 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:33.521  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:33.521  		--rc genhtml_branch_coverage=1
00:09:33.521  		--rc genhtml_function_coverage=1
00:09:33.521  		--rc genhtml_legend=1
00:09:33.521  		--rc geninfo_all_blocks=1
00:09:33.521  		--rc geninfo_unexecuted_blocks=1
00:09:33.521  		
00:09:33.521  		'
00:09:33.521   04:59:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:09:33.521    04:59:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:09:33.521   04:59:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:33.521   04:59:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:09:33.521   04:59:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:33.521   04:59:47 event -- common/autotest_common.sh@10 -- # set +x
00:09:33.521  ************************************
00:09:33.521  START TEST event_perf
00:09:33.521  ************************************
00:09:33.521   04:59:47 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:09:33.521  Running I/O for 1 seconds...[2024-11-20 04:59:47.401580] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:33.521  [2024-11-20 04:59:47.402095] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid124971 ]
00:09:33.780  [2024-11-20 04:59:47.577512] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:33.780  [2024-11-20 04:59:47.596540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:33.780  [2024-11-20 04:59:47.634176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:33.780  [2024-11-20 04:59:47.634318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:33.780  [2024-11-20 04:59:47.634446] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:33.780  [2024-11-20 04:59:47.634655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:34.716  Running I/O for 1 seconds...
00:09:34.716  lcore  0:   124035
00:09:34.716  lcore  1:   124035
00:09:34.716  lcore  2:   124037
00:09:34.716  lcore  3:   124034
00:09:34.974  done.
00:09:34.975  ************************************
00:09:34.975  END TEST event_perf
00:09:34.975  ************************************
00:09:34.975  
00:09:34.975  real	0m1.338s
00:09:34.975  user	0m4.118s
00:09:34.975  sys	0m0.113s
00:09:34.975   04:59:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:34.975   04:59:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:09:34.975   04:59:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:34.975   04:59:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:34.975   04:59:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:34.975   04:59:48 event -- common/autotest_common.sh@10 -- # set +x
00:09:34.975  ************************************
00:09:34.975  START TEST event_reactor
00:09:34.975  ************************************
00:09:34.975   04:59:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:09:34.975  [2024-11-20 04:59:48.796095] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:34.975  [2024-11-20 04:59:48.796907] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125019 ]
00:09:35.234  [2024-11-20 04:59:48.950495] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:35.234  [2024-11-20 04:59:48.975856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:35.234  [2024-11-20 04:59:49.010938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:36.169  test_start
00:09:36.169  oneshot
00:09:36.169  tick 100
00:09:36.169  tick 100
00:09:36.169  tick 250
00:09:36.169  tick 100
00:09:36.169  tick 100
00:09:36.169  tick 100
00:09:36.169  tick 250
00:09:36.169  tick 500
00:09:36.169  tick 100
00:09:36.169  tick 100
00:09:36.169  tick 250
00:09:36.169  tick 100
00:09:36.169  tick 100
00:09:36.169  test_end
00:09:36.169  ************************************
00:09:36.169  END TEST event_reactor
00:09:36.169  ************************************
00:09:36.169  
00:09:36.169  real	0m1.324s
00:09:36.169  user	0m1.117s
00:09:36.169  sys	0m0.105s
00:09:36.169   04:59:50 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:36.169   04:59:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:09:36.428   04:59:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:36.428   04:59:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:36.428   04:59:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:36.428   04:59:50 event -- common/autotest_common.sh@10 -- # set +x
00:09:36.428  ************************************
00:09:36.428  START TEST event_reactor_perf
00:09:36.428  ************************************
00:09:36.428   04:59:50 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:09:36.428  [2024-11-20 04:59:50.170579] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:36.428  [2024-11-20 04:59:50.170925] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125062 ]
00:09:36.428  [2024-11-20 04:59:50.304947] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:36.428  [2024-11-20 04:59:50.330918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:36.428  [2024-11-20 04:59:50.359767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:37.812  test_start
00:09:37.812  test_end
00:09:37.812  Performance:   412566 events per second
00:09:37.812  ************************************
00:09:37.812  END TEST event_reactor_perf
00:09:37.812  ************************************
00:09:37.812  
00:09:37.812  real	0m1.286s
00:09:37.812  user	0m1.106s
00:09:37.812  sys	0m0.077s
00:09:37.812   04:59:51 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:37.812   04:59:51 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:09:37.812    04:59:51 event -- event/event.sh@49 -- # uname -s
00:09:37.813   04:59:51 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:09:37.813   04:59:51 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:37.813   04:59:51 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:37.813   04:59:51 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:37.813   04:59:51 event -- common/autotest_common.sh@10 -- # set +x
00:09:37.813  ************************************
00:09:37.813  START TEST event_scheduler
00:09:37.813  ************************************
00:09:37.813   04:59:51 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh
00:09:37.813  * Looking for test storage...
00:09:37.813  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler
00:09:37.813    04:59:51 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:37.813     04:59:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version
00:09:37.813     04:59:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:37.813    04:59:51 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:37.813     04:59:51 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:37.813    04:59:51 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:09:37.813    04:59:51 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:37.813    04:59:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:37.813  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.813  		--rc genhtml_branch_coverage=1
00:09:37.813  		--rc genhtml_function_coverage=1
00:09:37.813  		--rc genhtml_legend=1
00:09:37.813  		--rc geninfo_all_blocks=1
00:09:37.813  		--rc geninfo_unexecuted_blocks=1
00:09:37.813  		
00:09:37.813  		'
00:09:37.813    04:59:51 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:37.813  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.813  		--rc genhtml_branch_coverage=1
00:09:37.813  		--rc genhtml_function_coverage=1
00:09:37.813  		--rc genhtml_legend=1
00:09:37.813  		--rc geninfo_all_blocks=1
00:09:37.813  		--rc geninfo_unexecuted_blocks=1
00:09:37.813  		
00:09:37.813  		'
00:09:37.813    04:59:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:09:37.813  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.813  		--rc genhtml_branch_coverage=1
00:09:37.813  		--rc genhtml_function_coverage=1
00:09:37.813  		--rc genhtml_legend=1
00:09:37.813  		--rc geninfo_all_blocks=1
00:09:37.813  		--rc geninfo_unexecuted_blocks=1
00:09:37.813  		
00:09:37.813  		'
00:09:37.813    04:59:51 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:09:37.813  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:37.813  		--rc genhtml_branch_coverage=1
00:09:37.813  		--rc genhtml_function_coverage=1
00:09:37.813  		--rc genhtml_legend=1
00:09:37.813  		--rc geninfo_all_blocks=1
00:09:37.813  		--rc geninfo_unexecuted_blocks=1
00:09:37.813  		
00:09:37.813  		'
00:09:37.813   04:59:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:09:37.813   04:59:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=125145
00:09:37.813   04:59:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:09:37.813   04:59:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:09:37.813   04:59:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 125145
00:09:37.813   04:59:51 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 125145 ']'
00:09:37.813   04:59:51 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:37.813   04:59:51 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:37.813   04:59:51 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:37.813  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:37.813   04:59:51 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:37.813   04:59:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:37.813  [2024-11-20 04:59:51.750330] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:37.813  [2024-11-20 04:59:51.750862] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125145 ]
00:09:38.072  [2024-11-20 04:59:51.926552] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:38.072  [2024-11-20 04:59:51.957062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:09:38.072  [2024-11-20 04:59:52.022034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:38.072  [2024-11-20 04:59:52.022184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:38.072  [2024-11-20 04:59:52.022319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:09:38.072  [2024-11-20 04:59:52.022413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:09:39.010   04:59:52 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  POWER: acpi-cpufreq driver is not supported
00:09:39.010  POWER: amd-pstate driver is not supported
00:09:39.010  POWER: cppc_cpufreq driver is not supported
00:09:39.010  POWER: intel_pstate driver is not supported
00:09:39.010  GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0
00:09:39.010  GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory
00:09:39.010  POWER: Unable to set Power Management Environment for lcore 0
00:09:39.010  [2024-11-20 04:59:52.697772] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0
00:09:39.010  [2024-11-20 04:59:52.697880] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0
00:09:39.010  [2024-11-20 04:59:52.698019] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:09:39.010  [2024-11-20 04:59:52.698246] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:09:39.010  [2024-11-20 04:59:52.698425] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:09:39.010  [2024-11-20 04:59:52.698614] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  [2024-11-20 04:59:52.786551] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  ************************************
00:09:39.010  START TEST scheduler_create_thread
00:09:39.010  ************************************
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  2
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  3
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  4
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  5
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  6
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  7
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  8
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  9
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010  10
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010    04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:09:39.010    04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010    04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010    04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:39.010   04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:39.010    04:59:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:09:39.010    04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:39.010    04:59:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:40.913    04:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:40.913   04:59:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12
00:09:40.913   04:59:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:09:40.913   04:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:40.913   04:59:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:41.481  ************************************
00:09:41.481  END TEST scheduler_create_thread
00:09:41.481  ************************************
00:09:41.481   04:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:41.481  
00:09:41.481  real	0m2.612s
00:09:41.481  user	0m0.014s
00:09:41.481  sys	0m0.004s
00:09:41.481   04:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:41.481   04:59:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:09:41.740   04:59:55 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:09:41.740   04:59:55 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 125145
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 125145 ']'
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 125145
00:09:41.740    04:59:55 event.event_scheduler -- common/autotest_common.sh@959 -- # uname
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:41.740    04:59:55 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125145
00:09:41.740  killing process with pid 125145
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125145'
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 125145
00:09:41.740   04:59:55 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 125145
00:09:41.998  [2024-11-20 04:59:55.894537] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:09:42.258  
00:09:42.258  real	0m4.647s
00:09:42.258  user	0m8.469s
00:09:42.258  sys	0m0.443s
00:09:42.258   04:59:56 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:42.258   04:59:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:09:42.258  ************************************
00:09:42.258  END TEST event_scheduler
00:09:42.258  ************************************
00:09:42.258   04:59:56 event -- event/event.sh@51 -- # modprobe -n nbd
00:09:42.258   04:59:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:09:42.258   04:59:56 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:42.258   04:59:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:42.258   04:59:56 event -- common/autotest_common.sh@10 -- # set +x
00:09:42.258  ************************************
00:09:42.258  START TEST app_repeat
00:09:42.258  ************************************
00:09:42.258   04:59:56 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=125256
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:09:42.258  Process app_repeat pid: 125256
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 125256'
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:42.258  spdk_app_start Round 0
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:09:42.258   04:59:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 125256 /var/tmp/spdk-nbd.sock
00:09:42.258   04:59:56 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 125256 ']'
00:09:42.258   04:59:56 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:42.258   04:59:56 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:42.258  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:42.258   04:59:56 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:42.258   04:59:56 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:42.258   04:59:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:42.517  [2024-11-20 04:59:56.243357] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:09:42.517  [2024-11-20 04:59:56.243846] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125256 ]
00:09:42.517  [2024-11-20 04:59:56.407963] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:09:42.517  [2024-11-20 04:59:56.427642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:42.517  [2024-11-20 04:59:56.460120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:42.517  [2024-11-20 04:59:56.460122] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:42.775   04:59:56 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:42.775   04:59:56 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:42.775   04:59:56 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:43.034  Malloc0
00:09:43.034   04:59:56 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:43.293  Malloc1
00:09:43.293   04:59:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:43.293   04:59:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:43.552  /dev/nbd0
00:09:43.552    04:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:43.552   04:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:43.552  1+0 records in
00:09:43.552  1+0 records out
00:09:43.552  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238551 s, 17.2 MB/s
00:09:43.552    04:59:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:43.552   04:59:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:43.552   04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:43.552   04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:43.552   04:59:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:43.811  /dev/nbd1
00:09:43.812    04:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:43.812   04:59:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:43.812  1+0 records in
00:09:43.812  1+0 records out
00:09:43.812  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00028813 s, 14.2 MB/s
00:09:43.812    04:59:57 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:43.812   04:59:57 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:43.812   04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:43.812   04:59:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:43.812    04:59:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:44.070    04:59:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:44.070     04:59:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:44.329    04:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:44.329    {
00:09:44.329      "nbd_device": "/dev/nbd0",
00:09:44.329      "bdev_name": "Malloc0"
00:09:44.329    },
00:09:44.329    {
00:09:44.329      "nbd_device": "/dev/nbd1",
00:09:44.329      "bdev_name": "Malloc1"
00:09:44.329    }
00:09:44.329  ]'
00:09:44.329     04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:44.329    {
00:09:44.329      "nbd_device": "/dev/nbd0",
00:09:44.329      "bdev_name": "Malloc0"
00:09:44.329    },
00:09:44.329    {
00:09:44.329      "nbd_device": "/dev/nbd1",
00:09:44.329      "bdev_name": "Malloc1"
00:09:44.329    }
00:09:44.329  ]'
00:09:44.329     04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:44.329    04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:44.329  /dev/nbd1'
00:09:44.329     04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:44.329  /dev/nbd1'
00:09:44.329     04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:44.329    04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:44.329    04:59:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:44.329  256+0 records in
00:09:44.329  256+0 records out
00:09:44.329  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00839387 s, 125 MB/s
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:44.329  256+0 records in
00:09:44.329  256+0 records out
00:09:44.329  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240587 s, 43.6 MB/s
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:44.329  256+0 records in
00:09:44.329  256+0 records out
00:09:44.329  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0300153 s, 34.9 MB/s
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:44.329   04:59:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:44.330   04:59:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:44.330   04:59:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:44.330   04:59:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:44.330   04:59:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:44.330   04:59:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:44.330   04:59:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:44.588    04:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:44.588   04:59:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:44.847    04:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:44.847   04:59:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:44.847   04:59:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:44.847   04:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:44.847   04:59:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:44.847   04:59:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:44.847   04:59:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:44.847   04:59:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:44.847    04:59:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:44.847    04:59:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:44.847     04:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:45.106    04:59:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:45.106     04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:45.106     04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:45.106    04:59:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:45.106     04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:45.106     04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:45.106     04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:45.106    04:59:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:45.106    04:59:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:45.106   04:59:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:45.106   04:59:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:45.106   04:59:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:45.106   04:59:58 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:45.366   04:59:59 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:45.627  [2024-11-20 04:59:59.471351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:45.627  [2024-11-20 04:59:59.495355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:45.627  [2024-11-20 04:59:59.495363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:45.627  [2024-11-20 04:59:59.548176] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:45.627  [2024-11-20 04:59:59.548291] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:48.916   05:00:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:48.916   05:00:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:09:48.916  spdk_app_start Round 1
00:09:48.916   05:00:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 125256 /var/tmp/spdk-nbd.sock
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 125256 ']'
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:48.916  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:48.916   05:00:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:48.916   05:00:02 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:48.916  Malloc0
00:09:48.916   05:00:02 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:49.176  Malloc1
00:09:49.176   05:00:03 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:49.176   05:00:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:49.435  /dev/nbd0
00:09:49.435    05:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:49.435   05:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:49.435  1+0 records in
00:09:49.435  1+0 records out
00:09:49.435  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542105 s, 7.6 MB/s
00:09:49.435    05:00:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:49.435   05:00:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:49.436   05:00:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:49.436   05:00:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:49.436   05:00:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:49.436   05:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:49.436   05:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:49.436   05:00:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:49.693  /dev/nbd1
00:09:49.693    05:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:49.693   05:00:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:49.693   05:00:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:49.693   05:00:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:49.694  1+0 records in
00:09:49.694  1+0 records out
00:09:49.694  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000562125 s, 7.3 MB/s
00:09:49.694    05:00:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:49.694   05:00:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:49.694   05:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:49.694   05:00:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:49.694    05:00:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:49.694    05:00:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:49.694     05:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:49.952    05:00:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:49.952    {
00:09:49.952      "nbd_device": "/dev/nbd0",
00:09:49.952      "bdev_name": "Malloc0"
00:09:49.952    },
00:09:49.952    {
00:09:49.952      "nbd_device": "/dev/nbd1",
00:09:49.952      "bdev_name": "Malloc1"
00:09:49.952    }
00:09:49.952  ]'
00:09:49.952     05:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:49.952    {
00:09:49.952      "nbd_device": "/dev/nbd0",
00:09:49.952      "bdev_name": "Malloc0"
00:09:49.952    },
00:09:49.952    {
00:09:49.952      "nbd_device": "/dev/nbd1",
00:09:49.952      "bdev_name": "Malloc1"
00:09:49.952    }
00:09:49.952  ]'
00:09:49.952     05:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:50.211    05:00:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:50.211  /dev/nbd1'
00:09:50.211     05:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:50.211  /dev/nbd1'
00:09:50.211     05:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:50.211    05:00:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:50.211    05:00:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:50.211  256+0 records in
00:09:50.211  256+0 records out
00:09:50.211  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00634182 s, 165 MB/s
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:50.211  256+0 records in
00:09:50.211  256+0 records out
00:09:50.211  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250909 s, 41.8 MB/s
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:50.211  256+0 records in
00:09:50.211  256+0 records out
00:09:50.211  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0291129 s, 36.0 MB/s
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:50.211   05:00:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:50.211   05:00:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:50.470    05:00:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:50.470   05:00:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:50.728    05:00:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:50.728   05:00:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:50.728   05:00:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:50.728   05:00:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:50.728   05:00:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:50.728   05:00:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:50.728   05:00:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:50.728   05:00:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:50.728    05:00:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:50.729    05:00:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:50.729     05:00:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:51.045    05:00:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:51.045     05:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:51.045     05:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:51.045    05:00:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:51.045     05:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:51.045     05:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:51.045     05:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:51.045    05:00:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:51.045    05:00:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:51.045   05:00:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:51.045   05:00:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:51.045   05:00:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:51.045   05:00:04 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:51.310   05:00:05 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:51.310  [2024-11-20 05:00:05.154710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:51.310  [2024-11-20 05:00:05.178848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:51.310  [2024-11-20 05:00:05.178851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:51.310  [2024-11-20 05:00:05.233021] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:51.310  [2024-11-20 05:00:05.233135] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:09:54.599  spdk_app_start Round 2
00:09:54.599  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:09:54.599   05:00:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:09:54.599   05:00:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:09:54.599   05:00:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 125256 /var/tmp/spdk-nbd.sock
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 125256 ']'
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:54.599   05:00:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:09:54.599   05:00:08 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:54.599  Malloc0
00:09:54.599   05:00:08 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:09:54.858  Malloc1
00:09:54.859   05:00:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:54.859   05:00:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:09:55.118  /dev/nbd0
00:09:55.118    05:00:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:09:55.118   05:00:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:09:55.118   05:00:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:09:55.118   05:00:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:55.118   05:00:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:55.118   05:00:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:55.118   05:00:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:09:55.118   05:00:08 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:55.119   05:00:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:55.119   05:00:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:55.119   05:00:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:55.119  1+0 records in
00:09:55.119  1+0 records out
00:09:55.119  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000972183 s, 4.2 MB/s
00:09:55.119    05:00:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.119   05:00:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:55.119   05:00:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.119   05:00:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:55.119   05:00:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:55.119   05:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:55.119   05:00:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:55.119   05:00:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:09:55.377  /dev/nbd1
00:09:55.377    05:00:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:09:55.377   05:00:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:09:55.377  1+0 records in
00:09:55.377  1+0 records out
00:09:55.377  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569015 s, 7.2 MB/s
00:09:55.377    05:00:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:09:55.377   05:00:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:09:55.377   05:00:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:09:55.377   05:00:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:09:55.377    05:00:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:55.377    05:00:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.377     05:00:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:55.945    05:00:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:09:55.945    {
00:09:55.945      "nbd_device": "/dev/nbd0",
00:09:55.945      "bdev_name": "Malloc0"
00:09:55.945    },
00:09:55.945    {
00:09:55.945      "nbd_device": "/dev/nbd1",
00:09:55.945      "bdev_name": "Malloc1"
00:09:55.945    }
00:09:55.945  ]'
00:09:55.945     05:00:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:09:55.945    {
00:09:55.945      "nbd_device": "/dev/nbd0",
00:09:55.945      "bdev_name": "Malloc0"
00:09:55.945    },
00:09:55.945    {
00:09:55.945      "nbd_device": "/dev/nbd1",
00:09:55.945      "bdev_name": "Malloc1"
00:09:55.945    }
00:09:55.945  ]'
00:09:55.945     05:00:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:55.945    05:00:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:09:55.945  /dev/nbd1'
00:09:55.945     05:00:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:09:55.945  /dev/nbd1'
00:09:55.945     05:00:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:55.945    05:00:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:09:55.945    05:00:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256
00:09:55.945  256+0 records in
00:09:55.945  256+0 records out
00:09:55.945  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106561 s, 98.4 MB/s
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:09:55.945  256+0 records in
00:09:55.945  256+0 records out
00:09:55.945  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242027 s, 43.3 MB/s
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:09:55.945  256+0 records in
00:09:55.945  256+0 records out
00:09:55.945  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276233 s, 38.0 MB/s
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:55.945   05:00:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:09:56.204    05:00:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:09:56.204   05:00:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:09:56.463    05:00:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:09:56.463   05:00:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:09:56.463   05:00:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:09:56.463   05:00:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:09:56.463   05:00:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:09:56.463   05:00:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:09:56.463   05:00:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:09:56.463   05:00:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:09:56.463    05:00:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:09:56.463    05:00:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:09:56.463     05:00:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:09:56.722    05:00:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:09:56.722     05:00:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:09:56.722     05:00:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:09:56.722    05:00:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:09:56.722     05:00:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:09:56.722     05:00:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:09:56.722     05:00:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:09:56.722    05:00:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:09:56.722    05:00:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:09:56.722   05:00:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:09:56.722   05:00:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:09:56.722   05:00:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:09:56.722   05:00:10 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:09:56.980   05:00:10 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:09:57.239  [2024-11-20 05:00:10.994295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:57.239  [2024-11-20 05:00:11.018586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:57.239  [2024-11-20 05:00:11.018590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:57.239  [2024-11-20 05:00:11.074945] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:09:57.239  [2024-11-20 05:00:11.075030] notify.c:  45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:10:00.526  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:00.526   05:00:13 event.app_repeat -- event/event.sh@38 -- # waitforlisten 125256 /var/tmp/spdk-nbd.sock
00:10:00.526   05:00:13 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 125256 ']'
00:10:00.526   05:00:13 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:00.526   05:00:13 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:00.526   05:00:13 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:00.526   05:00:13 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:00.526   05:00:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:10:00.526   05:00:14 event.app_repeat -- event/event.sh@39 -- # killprocess 125256
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 125256 ']'
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 125256
00:10:00.526    05:00:14 event.app_repeat -- common/autotest_common.sh@959 -- # uname
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:00.526    05:00:14 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125256
00:10:00.526  killing process with pid 125256
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125256'
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@973 -- # kill 125256
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@978 -- # wait 125256
00:10:00.526  spdk_app_start is called in Round 0.
00:10:00.526  Shutdown signal received, stop current app iteration
00:10:00.526  Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 reinitialization...
00:10:00.526  spdk_app_start is called in Round 1.
00:10:00.526  Shutdown signal received, stop current app iteration
00:10:00.526  Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 reinitialization...
00:10:00.526  spdk_app_start is called in Round 2.
00:10:00.526  Shutdown signal received, stop current app iteration
00:10:00.526  Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 reinitialization...
00:10:00.526  spdk_app_start is called in Round 3.
00:10:00.526  Shutdown signal received, stop current app iteration
00:10:00.526  ************************************
00:10:00.526  END TEST app_repeat
00:10:00.526  ************************************
00:10:00.526   05:00:14 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:10:00.526   05:00:14 event.app_repeat -- event/event.sh@42 -- # return 0
00:10:00.526  
00:10:00.526  real	0m18.138s
00:10:00.526  user	0m41.082s
00:10:00.526  sys	0m2.586s
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:00.526   05:00:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:10:00.526   05:00:14 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:10:00.526   05:00:14 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:00.526   05:00:14 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:00.526   05:00:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:00.526   05:00:14 event -- common/autotest_common.sh@10 -- # set +x
00:10:00.526  ************************************
00:10:00.526  START TEST cpu_locks
00:10:00.526  ************************************
00:10:00.526   05:00:14 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh
00:10:00.526  * Looking for test storage...
00:10:00.526  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:10:00.526    05:00:14 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:00.526     05:00:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:10:00.526     05:00:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:00.786    05:00:14 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:00.786     05:00:14 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:10:00.786    05:00:14 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:10:00.787    05:00:14 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:00.787    05:00:14 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:00.787    05:00:14 event.cpu_locks -- scripts/common.sh@368 -- # return 0
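The xtrace above steps through `lt 1.15 2`, which delegates to the `cmp_versions` helper in scripts/common.sh: both version strings are split on `.`, `-`, and `:` into arrays, then compared field by field, with missing fields treated as zero. A minimal standalone sketch of that pattern, using only the names visible in the trace (numeric fields only; the real helper handles more cases):

```shell
# Split both versions on . - : and compare field by field, as traced above.
cmp_versions() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v a b
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}            # missing fields count as 0
        (( a > b )) && { [[ $2 == '>' ]]; return; }
        (( a < b )) && { [[ $2 == '<' ]]; return; }
    done
    return 1    # versions equal: neither strictly < nor >
}

cmp_versions 1.15 '<' 2 && echo "1.15 < 2"    # the comparison traced above
```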
00:10:00.787    05:00:14 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:00.787    05:00:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:00.787  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.787  		--rc genhtml_branch_coverage=1
00:10:00.787  		--rc genhtml_function_coverage=1
00:10:00.787  		--rc genhtml_legend=1
00:10:00.787  		--rc geninfo_all_blocks=1
00:10:00.787  		--rc geninfo_unexecuted_blocks=1
00:10:00.787  		
00:10:00.787  		'
00:10:00.787    05:00:14 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:00.787  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.787  		--rc genhtml_branch_coverage=1
00:10:00.787  		--rc genhtml_function_coverage=1
00:10:00.787  		--rc genhtml_legend=1
00:10:00.787  		--rc geninfo_all_blocks=1
00:10:00.787  		--rc geninfo_unexecuted_blocks=1
00:10:00.787  		
00:10:00.787  		'
00:10:00.787    05:00:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:00.787  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.787  		--rc genhtml_branch_coverage=1
00:10:00.787  		--rc genhtml_function_coverage=1
00:10:00.787  		--rc genhtml_legend=1
00:10:00.787  		--rc geninfo_all_blocks=1
00:10:00.787  		--rc geninfo_unexecuted_blocks=1
00:10:00.787  		
00:10:00.787  		'
00:10:00.787    05:00:14 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:00.787  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:00.787  		--rc genhtml_branch_coverage=1
00:10:00.787  		--rc genhtml_function_coverage=1
00:10:00.787  		--rc genhtml_legend=1
00:10:00.787  		--rc geninfo_all_blocks=1
00:10:00.787  		--rc geninfo_unexecuted_blocks=1
00:10:00.787  		
00:10:00.787  		'
00:10:00.787   05:00:14 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:10:00.787   05:00:14 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:10:00.787   05:00:14 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:10:00.787   05:00:14 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:10:00.787   05:00:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:00.787   05:00:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:00.787   05:00:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:00.787  ************************************
00:10:00.787  START TEST default_locks
00:10:00.787  ************************************
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=125770
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 125770
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 125770 ']'
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:00.787  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:00.787   05:00:14 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:00.787  [2024-11-20 05:00:14.655794] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:00.787  [2024-11-20 05:00:14.656079] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125770 ]
00:10:01.046  [2024-11-20 05:00:14.808878] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:01.046  [2024-11-20 05:00:14.834479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:01.046  [2024-11-20 05:00:14.872648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:01.613   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:01.613   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:10:01.613   05:00:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 125770
00:10:01.872   05:00:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 125770
00:10:01.872   05:00:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
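The `locks_exist` check traced here confirms the target process still holds its per-core lock file by grepping `lslocks -p <pid>` for `spdk_cpu_lock`. The underlying primitive is an exclusive `flock` on a lock file; a minimal demo of just that primitive (the temp-file name here is illustrative, not SPDK's actual lock path):

```shell
# Take a non-blocking exclusive flock on a temp file -- the same mechanism
# behind the spdk_cpu_lock files that locks_exist greps for in lslocks output.
lockfile=$(mktemp)
exec 9>"$lockfile"            # keep the lock file open on fd 9
if flock -n 9; then           # non-blocking: fail fast if the lock is already held
    status=acquired
else
    status=busy
fi
echo "$status"
rm -f "$lockfile"             # unlinking does not release the flock; closing fd 9 does
```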
00:10:01.872   05:00:15 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 125770
00:10:01.872   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 125770 ']'
00:10:01.872   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 125770
00:10:01.872    05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:10:01.872   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:01.872    05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125770
00:10:02.130   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:02.130   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:02.130  killing process with pid 125770
00:10:02.130   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125770'
00:10:02.130   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 125770
00:10:02.130   05:00:15 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 125770
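The `killprocess` sequence traced above follows a fixed shape: verify the pid exists with `kill -0`, look up its command name with `ps -o comm=`, refuse to signal a `sudo` wrapper, then `kill` and `wait`. A simplified standalone sketch of that pattern (the real autotest_common.sh version also handles non-Linux hosts):

```shell
# Simplified killprocess: check the pid is alive, guard against killing the
# sudo wrapper itself, then terminate the process and reap it.
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                       # process must exist
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1           # never kill the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # reaping only works for our own children
}
```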
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 125770
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 125770
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:02.389    05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 125770
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 125770 ']'
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:02.389  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:02.389  ERROR: process (pid: 125770) is no longer running
00:10:02.389  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (125770) - No such process
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
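The `no_locks` helper traced above asserts that the lock-file array is empty after shutdown (the `(( 0 != 0 ))` arithmetic test). A parameterized sketch of that check; the directory default and glob pattern are assumptions based on the helper's name, not the exact paths cpu_locks.sh uses:

```shell
# Sketch of the no_locks assertion: after the target exits, no stale
# core-lock files should remain in the given directory.
no_locks() {
    local dir=${1:-/var/tmp}
    shopt -s nullglob                        # an unmatched glob expands to nothing
    local -a lock_files=("$dir"/spdk_cpu_lock*)
    (( ${#lock_files[@]} == 0 ))             # succeed only if no lock files are left
}
```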
00:10:02.389  
00:10:02.389  real	0m1.702s
00:10:02.389  user	0m1.700s
00:10:02.389  sys	0m0.623s
00:10:02.389  ************************************
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:02.389   05:00:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:10:02.389  END TEST default_locks
00:10:02.389  ************************************
00:10:02.389   05:00:16 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
00:10:02.389   05:00:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:02.389   05:00:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:02.389   05:00:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:02.389  ************************************
00:10:02.389  START TEST default_locks_via_rpc
00:10:02.389  ************************************
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=125826
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 125826
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 125826 ']'
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:02.389  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:02.389   05:00:16 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:02.648  [2024-11-20 05:00:16.404567] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:02.648  [2024-11-20 05:00:16.405135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125826 ]
00:10:02.648  [2024-11-20 05:00:16.558039] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:02.648  [2024-11-20 05:00:16.583677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:02.907  [2024-11-20 05:00:16.616212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 125826
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 125826
00:10:03.474   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 125826
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 125826 ']'
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 125826
00:10:03.733    05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:03.733    05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125826
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:03.733  killing process with pid 125826
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125826'
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 125826
00:10:03.733   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 125826
00:10:04.299  ************************************
00:10:04.299  END TEST default_locks_via_rpc
00:10:04.299  ************************************
00:10:04.299  
00:10:04.299  real	0m1.644s
00:10:04.299  user	0m1.696s
00:10:04.299  sys	0m0.522s
00:10:04.299   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:04.299   05:00:17 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:04.299   05:00:18 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:10:04.299   05:00:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:04.299   05:00:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:04.299   05:00:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:04.299  ************************************
00:10:04.299  START TEST non_locking_app_on_locked_coremask
00:10:04.299  ************************************
00:10:04.299  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:04.299   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:10:04.299   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=125879
00:10:04.299   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 125879 /var/tmp/spdk.sock
00:10:04.299   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125879 ']'
00:10:04.299   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:04.299   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:04.299   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:04.300   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:04.300   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:04.300   05:00:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:04.300  [2024-11-20 05:00:18.089525] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:04.300  [2024-11-20 05:00:18.089827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125879 ]
00:10:04.300  [2024-11-20 05:00:18.239210] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:04.558  [2024-11-20 05:00:18.267205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:04.558  [2024-11-20 05:00:18.303112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=125900
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 125900 /var/tmp/spdk2.sock
00:10:05.125  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125900 ']'
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:05.125   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:10:05.492  [2024-11-20 05:00:19.119809] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:05.492  [2024-11-20 05:00:19.120144] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125900 ]
00:10:05.492  [2024-11-20 05:00:19.276289] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:05.492  [2024-11-20 05:00:19.318701] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:05.492  [2024-11-20 05:00:19.318757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.492  [2024-11-20 05:00:19.380490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:06.057   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:06.057   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:06.057   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 125879
00:10:06.057   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125879
00:10:06.057   05:00:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 125879
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125879 ']'
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 125879
00:10:06.623    05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:06.623    05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125879
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:06.623  killing process with pid 125879
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125879'
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 125879
00:10:06.623   05:00:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 125879
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 125900
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125900 ']'
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 125900
00:10:07.559    05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:07.559    05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125900
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:07.559  killing process with pid 125900
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125900'
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 125900
00:10:07.559   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 125900
00:10:07.818  
00:10:07.818  real	0m3.620s
00:10:07.818  user	0m3.971s
00:10:07.818  sys	0m1.068s
00:10:07.818   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:07.818  ************************************
00:10:07.818  END TEST non_locking_app_on_locked_coremask
00:10:07.818   05:00:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:07.818  ************************************
00:10:07.818   05:00:21 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:10:07.818   05:00:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:07.818   05:00:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:07.818   05:00:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:07.818  ************************************
00:10:07.818  START TEST locking_app_on_unlocked_coremask
00:10:07.818  ************************************
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=125975
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 125975 /var/tmp/spdk.sock
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125975 ']'
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:07.818  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:07.818   05:00:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:07.818  [2024-11-20 05:00:21.753542] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:07.818  [2024-11-20 05:00:21.753802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125975 ]
00:10:08.076  [2024-11-20 05:00:21.888428] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:08.076  [2024-11-20 05:00:21.911698] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:08.076  [2024-11-20 05:00:21.911764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:08.076  [2024-11-20 05:00:21.946515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=125990
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 125990 /var/tmp/spdk2.sock
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 125990 ']'
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:09.012  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:09.012   05:00:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:09.012  [2024-11-20 05:00:22.793063] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:09.012  [2024-11-20 05:00:22.793383] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125990 ]
00:10:09.012  [2024-11-20 05:00:22.948400] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:09.270  [2024-11-20 05:00:22.983412] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:09.270  [2024-11-20 05:00:23.039498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:09.837   05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:09.837   05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:09.837   05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 125990
00:10:09.837   05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 125990
00:10:09.837   05:00:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:10.404   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 125975
00:10:10.404   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125975 ']'
00:10:10.404   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 125975
00:10:10.404    05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:10.404   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:10.404    05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125975
00:10:10.404   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:10.404   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:10.404  killing process with pid 125975
00:10:10.404   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125975'
00:10:10.405   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 125975
00:10:10.405   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 125975
00:10:11.340   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 125990
00:10:11.340   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 125990 ']'
00:10:11.340   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 125990
00:10:11.340    05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:11.340   05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:11.340    05:00:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125990
00:10:11.340   05:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:11.340   05:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:11.340  killing process with pid 125990
00:10:11.340   05:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125990'
00:10:11.340   05:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 125990
00:10:11.340   05:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 125990
00:10:11.599  
00:10:11.599  real	0m3.731s
00:10:11.599  user	0m4.183s
00:10:11.599  sys	0m1.091s
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:11.599  ************************************
00:10:11.599  END TEST locking_app_on_unlocked_coremask
00:10:11.599  ************************************
00:10:11.599   05:00:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:10:11.599   05:00:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:11.599   05:00:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:11.599   05:00:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:11.599  ************************************
00:10:11.599  START TEST locking_app_on_locked_coremask
00:10:11.599  ************************************
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=126066
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 126066 /var/tmp/spdk.sock
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 126066 ']'
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:11.599  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:11.599   05:00:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:11.858  [2024-11-20 05:00:25.567867] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:11.858  [2024-11-20 05:00:25.568197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126066 ]
00:10:11.858  [2024-11-20 05:00:25.721638] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:11.858  [2024-11-20 05:00:25.747148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:11.858  [2024-11-20 05:00:25.777974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=126087
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 126087 /var/tmp/spdk2.sock
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 126087 /var/tmp/spdk2.sock
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:12.794    05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 126087 /var/tmp/spdk2.sock
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 126087 ']'
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:12.794  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:12.794   05:00:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:12.794  [2024-11-20 05:00:26.582834] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:12.794  [2024-11-20 05:00:26.583157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126087 ]
00:10:12.794  [2024-11-20 05:00:26.736263] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:13.053  [2024-11-20 05:00:26.770448] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 126066 has claimed it.
00:10:13.053  [2024-11-20 05:00:26.770520] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:13.312  ERROR: process (pid: 126087) is no longer running
00:10:13.312  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (126087) - No such process
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 126066
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 126066
00:10:13.312   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 126066
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 126066 ']'
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 126066
00:10:13.572    05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:13.572    05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126066
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:13.572  killing process with pid 126066
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126066'
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 126066
00:10:13.572   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 126066
00:10:14.139  
00:10:14.140  real	0m2.321s
00:10:14.140  user	0m2.577s
00:10:14.140  sys	0m0.682s
00:10:14.140   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:14.140  ************************************
00:10:14.140  END TEST locking_app_on_locked_coremask
00:10:14.140  ************************************
00:10:14.140   05:00:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
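
The `NOT waitforlisten 126087 /var/tmp/spdk2.sock` lines above invert the expected result: the second spdk_tgt must fail to claim core 0 (it is already locked by pid 126066), so the step passes only when `waitforlisten` fails and `es` ends up 1. The inversion pattern can be sketched as follows; this is a hypothetical minimal helper, not autotest_common.sh's exact `NOT`/`valid_exec_arg` code.

```shell
#!/usr/bin/env bash
# Minimal sketch of the NOT-wrapper pattern: succeed only when the
# wrapped command fails, as in "NOT waitforlisten <pid> <sock>".
NOT() {
  if "$@"; then
    return 1   # wrapped command unexpectedly succeeded
  else
    return 0   # failure was the expected outcome
  fi
}

# `false` stands in for the doomed second spdk_tgt instance.
NOT false && result=expected_failure
NOT true  || result2=unexpected_success
```

In the log this is why the "No such process" kill message and `return 1` from the retry loop are the *passing* path for this test.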
00:10:14.140   05:00:27 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:10:14.140   05:00:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:14.140   05:00:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:14.140   05:00:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:14.140  ************************************
00:10:14.140  START TEST locking_overlapped_coremask
00:10:14.140  ************************************
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=126132
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 126132 /var/tmp/spdk.sock
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 126132 ']'
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:14.140  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:14.140   05:00:27 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:10:14.140  [2024-11-20 05:00:27.935460] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:14.140  [2024-11-20 05:00:27.936056] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126132 ]
00:10:14.398  [2024-11-20 05:00:28.101646] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:14.398  [2024-11-20 05:00:28.121432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:14.398  [2024-11-20 05:00:28.156895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:14.398  [2024-11-20 05:00:28.156973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:14.398  [2024-11-20 05:00:28.156982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=126155
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 126155 /var/tmp/spdk2.sock
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 126155 /var/tmp/spdk2.sock
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:14.965    05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 126155 /var/tmp/spdk2.sock
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 126155 ']'
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:14.965  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:14.965   05:00:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:15.224  [2024-11-20 05:00:28.959459] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:15.224  [2024-11-20 05:00:28.959680] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126155 ]
00:10:15.224  [2024-11-20 05:00:29.120607] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:15.224  [2024-11-20 05:00:29.157062] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 126132 has claimed it.
00:10:15.224  [2024-11-20 05:00:29.157138] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:15.791  ERROR: process (pid: 126155) is no longer running
00:10:15.791  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (126155) - No such process
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 126132
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 126132 ']'
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 126132
00:10:15.791    05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:15.791    05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126132
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:15.791  killing process with pid 126132
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126132'
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 126132
00:10:15.791   05:00:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 126132
00:10:16.359  
00:10:16.360  real	0m2.247s
00:10:16.360  user	0m6.172s
00:10:16.360  sys	0m0.541s
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:16.360  ************************************
00:10:16.360  END TEST locking_overlapped_coremask
00:10:16.360  ************************************
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
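
The collision in this test comes straight from the two coremasks: the first target runs with `-m 0x7` (cores 0-2) and the second with `-m 0x1c` (cores 2-4), so they intersect on core 2, exactly the core named in the `claim_cpu_cores` error above. The arithmetic can be checked directly:

```shell
#!/usr/bin/env bash
# Intersect the two coremasks used above: 0x7 (cores 0-2) and 0x1c (cores 2-4).
m1=0x7
m2=0x1c
overlap=$(( m1 & m2 ))           # 0b00111 & 0b11100 = 0b00100 = core 2
printf 'overlap mask: 0x%x\n' "$overlap"
```

This prints `overlap mask: 0x4`, i.e. only bit 2 is shared, matching "Cannot create lock on core 2" while cores 3 and 4 remain uncontested.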
00:10:16.360   05:00:30 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:10:16.360   05:00:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:16.360   05:00:30 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:16.360   05:00:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:16.360  ************************************
00:10:16.360  START TEST locking_overlapped_coremask_via_rpc
00:10:16.360  ************************************
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=126200
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 126200 /var/tmp/spdk.sock
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126200 ']'
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:16.360  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:16.360   05:00:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:16.360  [2024-11-20 05:00:30.225950] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:16.360  [2024-11-20 05:00:30.226200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126200 ]
00:10:16.619  [2024-11-20 05:00:30.379864] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:16.619  [2024-11-20 05:00:30.398808] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:16.619  [2024-11-20 05:00:30.398878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:16.619  [2024-11-20 05:00:30.439817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:16.619  [2024-11-20 05:00:30.439956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:16.619  [2024-11-20 05:00:30.439957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=126223
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 126223 /var/tmp/spdk2.sock
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126223 ']'
00:10:17.556  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:17.556   05:00:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:17.556  [2024-11-20 05:00:31.201576] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:17.556  [2024-11-20 05:00:31.201801] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126223 ]
00:10:17.556  [2024-11-20 05:00:31.364928] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:17.556  [2024-11-20 05:00:31.403755] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:17.556  [2024-11-20 05:00:31.403810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:17.556  [2024-11-20 05:00:31.495482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:17.556  [2024-11-20 05:00:31.507590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:17.556  [2024-11-20 05:00:31.507590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:18.494    05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:18.494  [2024-11-20 05:00:32.211613] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 126200 has claimed it.
00:10:18.494  request:
00:10:18.494  {
00:10:18.494  "method": "framework_enable_cpumask_locks",
00:10:18.494  "req_id": 1
00:10:18.494  }
00:10:18.494  Got JSON-RPC error response
00:10:18.494  response:
00:10:18.494  {
00:10:18.494  "code": -32603,
00:10:18.494  "message": "Failed to claim CPU core: 2"
00:10:18.494  }
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 126200 /var/tmp/spdk.sock
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126200 ']'
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:18.494  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:18.494   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 126223 /var/tmp/spdk2.sock
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 126223 ']'
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:18.754  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:18.754   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:10:19.013  
00:10:19.013  real	0m2.651s
00:10:19.013  user	0m1.365s
00:10:19.013  sys	0m0.228s
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:19.013  ************************************
00:10:19.013  END TEST locking_overlapped_coremask_via_rpc
00:10:19.013   05:00:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:19.013  ************************************
00:10:19.013   05:00:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup
00:10:19.013   05:00:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126200 ]]
00:10:19.013   05:00:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126200
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126200 ']'
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126200
00:10:19.013    05:00:32 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:19.013    05:00:32 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126200
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:19.013  killing process with pid 126200
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126200'
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 126200
00:10:19.013   05:00:32 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 126200
00:10:19.580   05:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126223 ]]
00:10:19.581   05:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126223
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126223 ']'
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126223
00:10:19.581    05:00:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:19.581    05:00:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126223
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:10:19.581  killing process with pid 126223
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126223'
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 126223
00:10:19.581   05:00:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 126223
00:10:20.150   05:00:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:10:20.150   05:00:33 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup
00:10:20.150   05:00:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 126200 ]]
00:10:20.150   05:00:33 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 126200
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126200 ']'
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126200
00:10:20.150  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (126200) - No such process
00:10:20.150  Process with pid 126200 is not found
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 126200 is not found'
00:10:20.150   05:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 126223 ]]
00:10:20.150   05:00:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 126223
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 126223 ']'
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 126223
00:10:20.150  /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (126223) - No such process
00:10:20.150  Process with pid 126223 is not found
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 126223 is not found'
00:10:20.150   05:00:33 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f
00:10:20.150  
00:10:20.150  real	0m19.508s
00:10:20.150  user	0m35.056s
00:10:20.150  sys	0m5.703s
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:20.150  ************************************
00:10:20.150   05:00:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:20.150  END TEST cpu_locks
00:10:20.150  ************************************
00:10:20.150  
00:10:20.150  real	0m46.760s
00:10:20.150  user	1m31.261s
00:10:20.150  sys	0m9.213s
00:10:20.150   05:00:33 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:20.150  ************************************
00:10:20.150  END TEST event
00:10:20.150   05:00:33 event -- common/autotest_common.sh@10 -- # set +x
00:10:20.150  ************************************
00:10:20.150   05:00:33  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:10:20.150   05:00:33  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:20.150   05:00:33  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:20.150   05:00:33  -- common/autotest_common.sh@10 -- # set +x
00:10:20.150  ************************************
00:10:20.150  START TEST thread
00:10:20.150  ************************************
00:10:20.150   05:00:33 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:10:20.150  * Looking for test storage...
00:10:20.150  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:10:20.150    05:00:34 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:20.150     05:00:34 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:20.150     05:00:34 thread -- common/autotest_common.sh@1693 -- # lcov --version
00:10:20.409    05:00:34 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:20.409    05:00:34 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:20.409    05:00:34 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:20.409    05:00:34 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:20.409    05:00:34 thread -- scripts/common.sh@336 -- # IFS=.-:
00:10:20.409    05:00:34 thread -- scripts/common.sh@336 -- # read -ra ver1
00:10:20.409    05:00:34 thread -- scripts/common.sh@337 -- # IFS=.-:
00:10:20.409    05:00:34 thread -- scripts/common.sh@337 -- # read -ra ver2
00:10:20.409    05:00:34 thread -- scripts/common.sh@338 -- # local 'op=<'
00:10:20.409    05:00:34 thread -- scripts/common.sh@340 -- # ver1_l=2
00:10:20.409    05:00:34 thread -- scripts/common.sh@341 -- # ver2_l=1
00:10:20.409    05:00:34 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:20.409    05:00:34 thread -- scripts/common.sh@344 -- # case "$op" in
00:10:20.409    05:00:34 thread -- scripts/common.sh@345 -- # : 1
00:10:20.409    05:00:34 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:20.409    05:00:34 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:20.409     05:00:34 thread -- scripts/common.sh@365 -- # decimal 1
00:10:20.409     05:00:34 thread -- scripts/common.sh@353 -- # local d=1
00:10:20.409     05:00:34 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:20.409     05:00:34 thread -- scripts/common.sh@355 -- # echo 1
00:10:20.409    05:00:34 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:10:20.409     05:00:34 thread -- scripts/common.sh@366 -- # decimal 2
00:10:20.409     05:00:34 thread -- scripts/common.sh@353 -- # local d=2
00:10:20.409     05:00:34 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:20.409     05:00:34 thread -- scripts/common.sh@355 -- # echo 2
00:10:20.409    05:00:34 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:10:20.409    05:00:34 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:20.409    05:00:34 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:20.409    05:00:34 thread -- scripts/common.sh@368 -- # return 0
00:10:20.409    05:00:34 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:20.409    05:00:34 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:20.409  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.409  		--rc genhtml_branch_coverage=1
00:10:20.409  		--rc genhtml_function_coverage=1
00:10:20.409  		--rc genhtml_legend=1
00:10:20.409  		--rc geninfo_all_blocks=1
00:10:20.409  		--rc geninfo_unexecuted_blocks=1
00:10:20.409  		
00:10:20.409  		'
00:10:20.409    05:00:34 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:20.409  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.409  		--rc genhtml_branch_coverage=1
00:10:20.409  		--rc genhtml_function_coverage=1
00:10:20.409  		--rc genhtml_legend=1
00:10:20.410  		--rc geninfo_all_blocks=1
00:10:20.410  		--rc geninfo_unexecuted_blocks=1
00:10:20.410  		
00:10:20.410  		'
00:10:20.410    05:00:34 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:20.410  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.410  		--rc genhtml_branch_coverage=1
00:10:20.410  		--rc genhtml_function_coverage=1
00:10:20.410  		--rc genhtml_legend=1
00:10:20.410  		--rc geninfo_all_blocks=1
00:10:20.410  		--rc geninfo_unexecuted_blocks=1
00:10:20.410  		
00:10:20.410  		'
00:10:20.410    05:00:34 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:20.410  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:20.410  		--rc genhtml_branch_coverage=1
00:10:20.410  		--rc genhtml_function_coverage=1
00:10:20.410  		--rc genhtml_legend=1
00:10:20.410  		--rc geninfo_all_blocks=1
00:10:20.410  		--rc geninfo_unexecuted_blocks=1
00:10:20.410  		
00:10:20.410  		'
00:10:20.410   05:00:34 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:10:20.410   05:00:34 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:10:20.410   05:00:34 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:20.410   05:00:34 thread -- common/autotest_common.sh@10 -- # set +x
00:10:20.410  ************************************
00:10:20.410  START TEST thread_poller_perf
00:10:20.410  ************************************
00:10:20.410   05:00:34 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:10:20.410  [2024-11-20 05:00:34.182271] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:20.410  [2024-11-20 05:00:34.182475] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126370 ]
00:10:20.410  [2024-11-20 05:00:34.317437] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:20.410  [2024-11-20 05:00:34.342941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:20.669  [2024-11-20 05:00:34.375733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:20.669  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:10:21.632  ======================================
00:10:21.632  busy:2208829654 (cyc)
00:10:21.632  total_run_count: 397000
00:10:21.632  tsc_hz: 2200000000 (cyc)
00:10:21.632  ======================================
00:10:21.632  poller_cost: 5563 (cyc), 2528 (nsec)
00:10:21.632  real	0m1.298s
00:10:21.632  user	0m1.122s
00:10:21.632  sys	0m0.076s
00:10:21.632   05:00:35 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:21.632   05:00:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:10:21.632  ************************************
00:10:21.632  END TEST thread_poller_perf
00:10:21.632  ************************************
00:10:21.632   05:00:35 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:21.632   05:00:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:10:21.632   05:00:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:21.632   05:00:35 thread -- common/autotest_common.sh@10 -- # set +x
00:10:21.632  ************************************
00:10:21.632  START TEST thread_poller_perf
00:10:21.632  ************************************
00:10:21.632   05:00:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:10:21.632  [2024-11-20 05:00:35.547794] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:21.632  [2024-11-20 05:00:35.548268] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126411 ]
00:10:21.891  [2024-11-20 05:00:35.701254] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:21.891  [2024-11-20 05:00:35.727159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:21.891  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:10:21.891  [2024-11-20 05:00:35.757751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:23.268  ======================================
00:10:23.268  busy:2202954746 (cyc)
00:10:23.268  total_run_count: 4962000
00:10:23.268  tsc_hz: 2200000000 (cyc)
00:10:23.268  ======================================
00:10:23.268  poller_cost: 443 (cyc), 201 (nsec)
00:10:23.268  real	0m1.321s
00:10:23.268  user	0m1.130s
00:10:23.268  sys	0m0.086s
00:10:23.268   05:00:36 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:23.268   05:00:36 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:10:23.268  ************************************
00:10:23.268  END TEST thread_poller_perf
00:10:23.268  ************************************
00:10:23.268   05:00:36 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
00:10:23.268   05:00:36 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:10:23.268   05:00:36 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:23.268   05:00:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:23.268   05:00:36 thread -- common/autotest_common.sh@10 -- # set +x
00:10:23.268  ************************************
00:10:23.268  START TEST thread_spdk_lock
00:10:23.268  ************************************
00:10:23.268   05:00:36 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:10:23.268  [2024-11-20 05:00:36.925457] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:23.268  [2024-11-20 05:00:36.925929] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126458 ]
00:10:23.268  [2024-11-20 05:00:37.087579] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:23.268  [2024-11-20 05:00:37.107476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:23.268  [2024-11-20 05:00:37.144755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:23.268  [2024-11-20 05:00:37.144762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:24.202  [2024-11-20 05:00:37.795616] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 980:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:24.202  [2024-11-20 05:00:37.795702] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3112:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:10:24.202  [2024-11-20 05:00:37.795737] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3067:sspin_stacks_print: *ERROR*: spinlock 0x563fc204fbc0
00:10:24.202  [2024-11-20 05:00:37.796621] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:24.202  [2024-11-20 05:00:37.796724] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1041:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:24.202  [2024-11-20 05:00:37.796764] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 875:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:10:24.202  Starting test contend
00:10:24.202    Worker    Delay  Wait us  Hold us Total us
00:10:24.202         0        3   105180   215494   320675
00:10:24.202         1        5    27839   330644   358483
00:10:24.202  PASS test contend
00:10:24.202  Starting test hold_by_poller
00:10:24.202  PASS test hold_by_poller
00:10:24.203  Starting test hold_by_message
00:10:24.203  PASS test hold_by_message
00:10:24.203  /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:10:24.203     100014 assertions passed
00:10:24.203          0 assertions failed
00:10:24.203  
00:10:24.203  real	0m0.983s
00:10:24.203  user	0m1.444s
00:10:24.203  sys	0m0.089s
00:10:24.203   05:00:37 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:24.203   05:00:37 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:10:24.203  ************************************
00:10:24.203  END TEST thread_spdk_lock
00:10:24.203  ************************************
00:10:24.203  
00:10:24.203  real	0m3.926s
00:10:24.203  user	0m3.875s
00:10:24.203  sys	0m0.392s
00:10:24.203   05:00:37 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:24.203   05:00:37 thread -- common/autotest_common.sh@10 -- # set +x
00:10:24.203  ************************************
00:10:24.203  END TEST thread
00:10:24.203  ************************************
00:10:24.203   05:00:37  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:10:24.203   05:00:37  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:10:24.203   05:00:37  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:24.203   05:00:37  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:24.203   05:00:37  -- common/autotest_common.sh@10 -- # set +x
00:10:24.203  ************************************
00:10:24.203  START TEST app_cmdline
00:10:24.203  ************************************
00:10:24.203   05:00:37 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:10:24.203  * Looking for test storage...
00:10:24.203  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:10:24.203    05:00:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:24.203     05:00:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version
00:10:24.203     05:00:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:24.203    05:00:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@345 -- # : 1
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:24.203     05:00:38 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:24.203    05:00:38 app_cmdline -- scripts/common.sh@368 -- # return 0
00:10:24.203    05:00:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:24.203    05:00:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:24.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.203  		--rc genhtml_branch_coverage=1
00:10:24.203  		--rc genhtml_function_coverage=1
00:10:24.203  		--rc genhtml_legend=1
00:10:24.203  		--rc geninfo_all_blocks=1
00:10:24.203  		--rc geninfo_unexecuted_blocks=1
00:10:24.203  		
00:10:24.203  		'
00:10:24.203    05:00:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:24.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.203  		--rc genhtml_branch_coverage=1
00:10:24.203  		--rc genhtml_function_coverage=1
00:10:24.203  		--rc genhtml_legend=1
00:10:24.203  		--rc geninfo_all_blocks=1
00:10:24.203  		--rc geninfo_unexecuted_blocks=1
00:10:24.203  		
00:10:24.203  		'
00:10:24.203    05:00:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:24.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.203  		--rc genhtml_branch_coverage=1
00:10:24.203  		--rc genhtml_function_coverage=1
00:10:24.203  		--rc genhtml_legend=1
00:10:24.203  		--rc geninfo_all_blocks=1
00:10:24.203  		--rc geninfo_unexecuted_blocks=1
00:10:24.203  		
00:10:24.203  		'
00:10:24.203    05:00:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:24.203  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:24.203  		--rc genhtml_branch_coverage=1
00:10:24.203  		--rc genhtml_function_coverage=1
00:10:24.203  		--rc genhtml_legend=1
00:10:24.203  		--rc geninfo_all_blocks=1
00:10:24.203  		--rc geninfo_unexecuted_blocks=1
00:10:24.203  		
00:10:24.203  		'
00:10:24.203   05:00:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:10:24.203   05:00:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=126546
00:10:24.203   05:00:38 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:10:24.461   05:00:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 126546
00:10:24.461   05:00:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 126546 ']'
00:10:24.461   05:00:38 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:24.461   05:00:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:24.461   05:00:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:24.461  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:24.461   05:00:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:24.461   05:00:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:24.461  [2024-11-20 05:00:38.223182] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:24.461  [2024-11-20 05:00:38.223940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126546 ]
00:10:24.461  [2024-11-20 05:00:38.360613] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:24.461  [2024-11-20 05:00:38.383738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:24.461  [2024-11-20 05:00:38.415869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:24.719   05:00:38 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:24.720   05:00:38 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:10:24.720   05:00:38 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:10:25.287  {
00:10:25.287    "version": "SPDK v25.01-pre git sha1 f22e807f1",
00:10:25.287    "fields": {
00:10:25.287      "major": 25,
00:10:25.287      "minor": 1,
00:10:25.287      "patch": 0,
00:10:25.287      "suffix": "-pre",
00:10:25.287      "commit": "f22e807f1"
00:10:25.287    }
00:10:25.287  }
00:10:25.287   05:00:38 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:10:25.287   05:00:38 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:10:25.287   05:00:38 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:10:25.287   05:00:38 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:10:25.287    05:00:38 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:10:25.287    05:00:38 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:10:25.287    05:00:38 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:25.287    05:00:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:25.287    05:00:38 app_cmdline -- app/cmdline.sh@26 -- # sort
00:10:25.287    05:00:38 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:25.287   05:00:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:10:25.287   05:00:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:10:25.287   05:00:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:25.287    05:00:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:25.287    05:00:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:10:25.287   05:00:39 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:10:25.545  request:
00:10:25.546  {
00:10:25.546    "method": "env_dpdk_get_mem_stats",
00:10:25.546    "req_id": 1
00:10:25.546  }
00:10:25.546  Got JSON-RPC error response
00:10:25.546  response:
00:10:25.546  {
00:10:25.546    "code": -32601,
00:10:25.546    "message": "Method not found"
00:10:25.546  }
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:25.546   05:00:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 126546
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 126546 ']'
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 126546
00:10:25.546    05:00:39 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:25.546    05:00:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126546
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:25.546  killing process with pid 126546
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126546'
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@973 -- # kill 126546
00:10:25.546   05:00:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 126546
00:10:25.804  
00:10:25.804  real	0m1.732s
00:10:25.804  user	0m2.100s
00:10:25.804  sys	0m0.460s
00:10:25.804   05:00:39 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:25.804   05:00:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:10:25.804  ************************************
00:10:25.804  END TEST app_cmdline
00:10:25.804  ************************************
00:10:25.804   05:00:39  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:10:25.804   05:00:39  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:25.804   05:00:39  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:25.804   05:00:39  -- common/autotest_common.sh@10 -- # set +x
00:10:25.804  ************************************
00:10:25.804  START TEST version
00:10:25.804  ************************************
00:10:25.804   05:00:39 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:10:26.062  * Looking for test storage...
00:10:26.062  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:10:26.062    05:00:39 version -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:26.062     05:00:39 version -- common/autotest_common.sh@1693 -- # lcov --version
00:10:26.062     05:00:39 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:26.062    05:00:39 version -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:26.062    05:00:39 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:26.062    05:00:39 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:26.062    05:00:39 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:26.062    05:00:39 version -- scripts/common.sh@336 -- # IFS=.-:
00:10:26.062    05:00:39 version -- scripts/common.sh@336 -- # read -ra ver1
00:10:26.062    05:00:39 version -- scripts/common.sh@337 -- # IFS=.-:
00:10:26.063    05:00:39 version -- scripts/common.sh@337 -- # read -ra ver2
00:10:26.063    05:00:39 version -- scripts/common.sh@338 -- # local 'op=<'
00:10:26.063    05:00:39 version -- scripts/common.sh@340 -- # ver1_l=2
00:10:26.063    05:00:39 version -- scripts/common.sh@341 -- # ver2_l=1
00:10:26.063    05:00:39 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:26.063    05:00:39 version -- scripts/common.sh@344 -- # case "$op" in
00:10:26.063    05:00:39 version -- scripts/common.sh@345 -- # : 1
00:10:26.063    05:00:39 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:26.063    05:00:39 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:26.063     05:00:39 version -- scripts/common.sh@365 -- # decimal 1
00:10:26.063     05:00:39 version -- scripts/common.sh@353 -- # local d=1
00:10:26.063     05:00:39 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:26.063     05:00:39 version -- scripts/common.sh@355 -- # echo 1
00:10:26.063    05:00:39 version -- scripts/common.sh@365 -- # ver1[v]=1
00:10:26.063     05:00:39 version -- scripts/common.sh@366 -- # decimal 2
00:10:26.063     05:00:39 version -- scripts/common.sh@353 -- # local d=2
00:10:26.063     05:00:39 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:26.063     05:00:39 version -- scripts/common.sh@355 -- # echo 2
00:10:26.063    05:00:39 version -- scripts/common.sh@366 -- # ver2[v]=2
00:10:26.063    05:00:39 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:26.063    05:00:39 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:26.063    05:00:39 version -- scripts/common.sh@368 -- # return 0
00:10:26.063    05:00:39 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:26.063    05:00:39 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:26.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.063  		--rc genhtml_branch_coverage=1
00:10:26.063  		--rc genhtml_function_coverage=1
00:10:26.063  		--rc genhtml_legend=1
00:10:26.063  		--rc geninfo_all_blocks=1
00:10:26.063  		--rc geninfo_unexecuted_blocks=1
00:10:26.063  		
00:10:26.063  		'
00:10:26.063    05:00:39 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:26.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.063  		--rc genhtml_branch_coverage=1
00:10:26.063  		--rc genhtml_function_coverage=1
00:10:26.063  		--rc genhtml_legend=1
00:10:26.063  		--rc geninfo_all_blocks=1
00:10:26.063  		--rc geninfo_unexecuted_blocks=1
00:10:26.063  		
00:10:26.063  		'
00:10:26.063    05:00:39 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:26.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.063  		--rc genhtml_branch_coverage=1
00:10:26.063  		--rc genhtml_function_coverage=1
00:10:26.063  		--rc genhtml_legend=1
00:10:26.063  		--rc geninfo_all_blocks=1
00:10:26.063  		--rc geninfo_unexecuted_blocks=1
00:10:26.063  		
00:10:26.063  		'
00:10:26.063    05:00:39 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:26.063  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.063  		--rc genhtml_branch_coverage=1
00:10:26.063  		--rc genhtml_function_coverage=1
00:10:26.063  		--rc genhtml_legend=1
00:10:26.063  		--rc geninfo_all_blocks=1
00:10:26.063  		--rc geninfo_unexecuted_blocks=1
00:10:26.063  		
00:10:26.063  		'
00:10:26.063    05:00:39 version -- app/version.sh@17 -- # get_header_version major
00:10:26.063    05:00:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # cut -f2
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # tr -d '"'
00:10:26.063   05:00:39 version -- app/version.sh@17 -- # major=25
00:10:26.063    05:00:39 version -- app/version.sh@18 -- # get_header_version minor
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # cut -f2
00:10:26.063    05:00:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # tr -d '"'
00:10:26.063   05:00:39 version -- app/version.sh@18 -- # minor=1
00:10:26.063    05:00:39 version -- app/version.sh@19 -- # get_header_version patch
00:10:26.063    05:00:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # cut -f2
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # tr -d '"'
00:10:26.063   05:00:39 version -- app/version.sh@19 -- # patch=0
00:10:26.063    05:00:39 version -- app/version.sh@20 -- # get_header_version suffix
00:10:26.063    05:00:39 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # cut -f2
00:10:26.063    05:00:39 version -- app/version.sh@14 -- # tr -d '"'
00:10:26.063   05:00:39 version -- app/version.sh@20 -- # suffix=-pre
00:10:26.063   05:00:39 version -- app/version.sh@22 -- # version=25.1
00:10:26.063   05:00:39 version -- app/version.sh@25 -- # (( patch != 0 ))
00:10:26.063   05:00:39 version -- app/version.sh@28 -- # version=25.1rc0
00:10:26.063   05:00:39 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:10:26.063    05:00:39 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:10:26.063   05:00:40 version -- app/version.sh@30 -- # py_version=25.1rc0
00:10:26.063   05:00:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:10:26.063  
00:10:26.063  real	0m0.250s
00:10:26.063  user	0m0.170s
00:10:26.063  sys	0m0.125s
00:10:26.063   05:00:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:26.063   05:00:40 version -- common/autotest_common.sh@10 -- # set +x
00:10:26.063  ************************************
00:10:26.063  END TEST version
00:10:26.063  ************************************
00:10:26.322   05:00:40  -- spdk/autotest.sh@179 -- # '[' 1 -eq 1 ']'
00:10:26.323   05:00:40  -- spdk/autotest.sh@180 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:10:26.323   05:00:40  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:26.323   05:00:40  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:26.323   05:00:40  -- common/autotest_common.sh@10 -- # set +x
00:10:26.323  ************************************
00:10:26.323  START TEST blockdev_general
00:10:26.323  ************************************
00:10:26.323   05:00:40 blockdev_general -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:10:26.323  * Looking for test storage...
00:10:26.323  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:10:26.323    05:00:40 blockdev_general -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:26.323     05:00:40 blockdev_general -- common/autotest_common.sh@1693 -- # lcov --version
00:10:26.323     05:00:40 blockdev_general -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:26.323    05:00:40 blockdev_general -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@336 -- # IFS=.-:
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@336 -- # read -ra ver1
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@337 -- # IFS=.-:
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@337 -- # read -ra ver2
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@338 -- # local 'op=<'
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@340 -- # ver1_l=2
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@341 -- # ver2_l=1
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@344 -- # case "$op" in
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@345 -- # : 1
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@365 -- # decimal 1
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@353 -- # local d=1
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@355 -- # echo 1
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@365 -- # ver1[v]=1
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@366 -- # decimal 2
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@353 -- # local d=2
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:26.323     05:00:40 blockdev_general -- scripts/common.sh@355 -- # echo 2
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@366 -- # ver2[v]=2
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:26.323    05:00:40 blockdev_general -- scripts/common.sh@368 -- # return 0
00:10:26.323    05:00:40 blockdev_general -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:26.323    05:00:40 blockdev_general -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:26.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.323  		--rc genhtml_branch_coverage=1
00:10:26.323  		--rc genhtml_function_coverage=1
00:10:26.323  		--rc genhtml_legend=1
00:10:26.323  		--rc geninfo_all_blocks=1
00:10:26.323  		--rc geninfo_unexecuted_blocks=1
00:10:26.323  		
00:10:26.323  		'
00:10:26.323    05:00:40 blockdev_general -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:26.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.323  		--rc genhtml_branch_coverage=1
00:10:26.323  		--rc genhtml_function_coverage=1
00:10:26.323  		--rc genhtml_legend=1
00:10:26.323  		--rc geninfo_all_blocks=1
00:10:26.323  		--rc geninfo_unexecuted_blocks=1
00:10:26.323  		
00:10:26.323  		'
00:10:26.323    05:00:40 blockdev_general -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:26.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.323  		--rc genhtml_branch_coverage=1
00:10:26.323  		--rc genhtml_function_coverage=1
00:10:26.323  		--rc genhtml_legend=1
00:10:26.323  		--rc geninfo_all_blocks=1
00:10:26.323  		--rc geninfo_unexecuted_blocks=1
00:10:26.323  		
00:10:26.323  		'
00:10:26.323    05:00:40 blockdev_general -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:10:26.323  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:26.323  		--rc genhtml_branch_coverage=1
00:10:26.323  		--rc genhtml_function_coverage=1
00:10:26.323  		--rc genhtml_legend=1
00:10:26.323  		--rc geninfo_all_blocks=1
00:10:26.323  		--rc geninfo_unexecuted_blocks=1
00:10:26.323  		
00:10:26.323  		'
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:10:26.323    05:00:40 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@20 -- # :
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:10:26.323    05:00:40 blockdev_general -- bdev/blockdev.sh@673 -- # uname -s
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@681 -- # test_type=bdev
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@682 -- # crypto_device=
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@683 -- # dek=
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@684 -- # env_ctx=
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@689 -- # [[ bdev == bdev ]]
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@690 -- # wait_for_rpc=--wait-for-rpc
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=126719
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 126719
00:10:26.323   05:00:40 blockdev_general -- common/autotest_common.sh@835 -- # '[' -z 126719 ']'
00:10:26.323   05:00:40 blockdev_general -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:26.323   05:00:40 blockdev_general -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:26.323  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:26.323   05:00:40 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:10:26.323   05:00:40 blockdev_general -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:26.323   05:00:40 blockdev_general -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:26.323   05:00:40 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:26.582  [2024-11-20 05:00:40.318921] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:26.582  [2024-11-20 05:00:40.319244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126719 ]
00:10:26.582  [2024-11-20 05:00:40.468514] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:26.582  [2024-11-20 05:00:40.494669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:26.582  [2024-11-20 05:00:40.527440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:27.518   05:00:41 blockdev_general -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:27.518   05:00:41 blockdev_general -- common/autotest_common.sh@868 -- # return 0
00:10:27.518   05:00:41 blockdev_general -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:10:27.518   05:00:41 blockdev_general -- bdev/blockdev.sh@695 -- # setup_bdev_conf
00:10:27.518   05:00:41 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd
00:10:27.518   05:00:41 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:27.518   05:00:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:27.777  [2024-11-20 05:00:41.547088] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:27.777  [2024-11-20 05:00:41.547163] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:27.777  
00:10:27.777  [2024-11-20 05:00:41.555037] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:27.777  [2024-11-20 05:00:41.555118] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:27.777  
00:10:27.777  Malloc0
00:10:27.777  Malloc1
00:10:27.777  Malloc2
00:10:27.777  Malloc3
00:10:27.777  Malloc4
00:10:27.777  Malloc5
00:10:27.777  Malloc6
00:10:27.777  Malloc7
00:10:27.777  Malloc8
00:10:27.777  Malloc9
00:10:27.777  [2024-11-20 05:00:41.724289] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:27.777  [2024-11-20 05:00:41.724381] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:27.777  [2024-11-20 05:00:41.724426] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680
00:10:27.777  [2024-11-20 05:00:41.724453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:27.777  [2024-11-20 05:00:41.726711] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:27.777  [2024-11-20 05:00:41.726785] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:27.777  TestPT
00:10:28.037   05:00:41 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.037   05:00:41 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
00:10:28.037  5000+0 records in
00:10:28.037  5000+0 records out
00:10:28.037  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0274004 s, 374 MB/s
00:10:28.037   05:00:41 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
00:10:28.037   05:00:41 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.037   05:00:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:28.037  AIO0
00:10:28.037   05:00:41 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.037   05:00:41 blockdev_general -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:10:28.037   05:00:41 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.037   05:00:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:28.037   05:00:41 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.037   05:00:41 blockdev_general -- bdev/blockdev.sh@739 -- # cat
00:10:28.037    05:00:41 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.037    05:00:41 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.037    05:00:41 blockdev_general -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.037   05:00:41 blockdev_general -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:10:28.037    05:00:41 blockdev_general -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.037    05:00:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:28.037    05:00:41 blockdev_general -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:10:28.297    05:00:41 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.297   05:00:42 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:10:28.297    05:00:42 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r .name
00:10:28.299    05:00:42 blockdev_general -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "ee2a2068-f5b6-4218-9b38-43786a56a25b"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "ee2a2068-f5b6-4218-9b38-43786a56a25b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "4b613879-0cfd-5662-b06f-1f38ad217b15"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "4b613879-0cfd-5662-b06f-1f38ad217b15",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' ' 
   "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "29ede4e2-5dfe-51a2-983d-8c3c11af1479"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "29ede4e2-5dfe-51a2-983d-8c3c11af1479",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "2f1081c4-6e06-56a0-a2e2-256cb6629107"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "2f1081c4-6e06-56a0-a2e2-256cb6629107",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' 
'    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "3dfddc9f-602d-54df-834c-4119347c3a94"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "3dfddc9f-602d-54df-834c-4119347c3a94",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "a092d8c3-10fb-581b-ba31-aaa23b451d8c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "a092d8c3-10fb-581b-ba31-aaa23b451d8c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": 
false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "baa4a0f0-c399-5f1c-a51d-5052a470b73a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "baa4a0f0-c399-5f1c-a51d-5052a470b73a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "02653c46-a8c1-51e8-964b-ffaf7adb76c3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "02653c46-a8c1-51e8-964b-ffaf7adb76c3",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "1b084445-4fc9-5c54-9257-0225e01bb346"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "1b084445-4fc9-5c54-9257-0225e01bb346",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    
"fcc03676-4e3b-519d-b924-fe1179ee7e2b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "fcc03676-4e3b-519d-b924-fe1179ee7e2b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "d8fd8204-299f-5ae4-9d34-0e8ed21c4ef3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "d8fd8204-299f-5ae4-9d34-0e8ed21c4ef3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  
"driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "50e3c69d-9ac5-56f2-8a16-948925865090"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "50e3c69d-9ac5-56f2-8a16-948925865090",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    
"nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "f5eca368-2d76-4e20-9ec5-8806ca8133a8",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "9447db89-8841-4b03-ad7e-7af0df92f215",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "461c4102-e18f-42ff-acff-1dd270b2e803"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "461c4102-e18f-42ff-acff-1dd270b2e803",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  
"supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "461c4102-e18f-42ff-acff-1dd270b2e803",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "5771bbcd-3b86-4266-83d7-6dbc37141fc9",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "548c016f-4204-4663-9be1-9ca15df866b6",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "906822be-d7b5-437a-8855-496f614673e5"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "906822be-d7b5-437a-8855-496f614673e5",' '  "assigned_rate_limits": {' '    
"rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "906822be-d7b5-437a-8855-496f614673e5",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "7b15bcbd-ee24-4fd2-9a8b-054488f152e0",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "2cf4c52e-d1a5-42d4-b3fd-76f1c3020646",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "a6f177cf-8524-4367-bd94-de98d941b1b8"' '  ],' '  "product_name": "AIO 
disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "a6f177cf-8524-4367-bd94-de98d941b1b8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:10:28.299   05:00:42 blockdev_general -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:10:28.299   05:00:42 blockdev_general -- bdev/blockdev.sh@751 -- # hello_world_bdev=Malloc0
00:10:28.299   05:00:42 blockdev_general -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:10:28.299   05:00:42 blockdev_general -- bdev/blockdev.sh@753 -- # killprocess 126719
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@954 -- # '[' -z 126719 ']'
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@958 -- # kill -0 126719
00:10:28.299    05:00:42 blockdev_general -- common/autotest_common.sh@959 -- # uname
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:28.299    05:00:42 blockdev_general -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126719
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:28.299  killing process with pid 126719
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126719'
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@973 -- # kill 126719
00:10:28.299   05:00:42 blockdev_general -- common/autotest_common.sh@978 -- # wait 126719
00:10:28.867   05:00:42 blockdev_general -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:10:28.867   05:00:42 blockdev_general -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:10:28.867   05:00:42 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:10:28.867   05:00:42 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:28.867   05:00:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:28.867  ************************************
00:10:28.867  START TEST bdev_hello_world
00:10:28.867  ************************************
00:10:28.867   05:00:42 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:10:28.867  [2024-11-20 05:00:42.677587] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:28.867  [2024-11-20 05:00:42.677885] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126773 ]
00:10:29.126  [2024-11-20 05:00:42.828540] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:29.127  [2024-11-20 05:00:42.854982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:29.127  [2024-11-20 05:00:42.895240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:29.127  [2024-11-20 05:00:43.038997] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:29.127  [2024-11-20 05:00:43.039102] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:29.127  [2024-11-20 05:00:43.046919] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:29.127  [2024-11-20 05:00:43.046991] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:29.127  [2024-11-20 05:00:43.054960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:29.127  [2024-11-20 05:00:43.055022] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:29.127  [2024-11-20 05:00:43.055058] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:29.386  [2024-11-20 05:00:43.149691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:29.386  [2024-11-20 05:00:43.149803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:29.386  [2024-11-20 05:00:43.149842] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:10:29.386  [2024-11-20 05:00:43.149874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:29.386  [2024-11-20 05:00:43.152131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:29.386  [2024-11-20 05:00:43.152192] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:29.386  [2024-11-20 05:00:43.316436] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:10:29.386  [2024-11-20 05:00:43.316586] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0
00:10:29.386  [2024-11-20 05:00:43.316758] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:10:29.386  [2024-11-20 05:00:43.316925] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:10:29.386  [2024-11-20 05:00:43.317098] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:10:29.386  [2024-11-20 05:00:43.317176] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:10:29.386  [2024-11-20 05:00:43.317353] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:10:29.386  
00:10:29.386  [2024-11-20 05:00:43.317508] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:10:29.953  
00:10:29.953  real	0m1.031s
00:10:29.953  user	0m0.572s
00:10:29.953  sys	0m0.308s
00:10:29.953   05:00:43 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:29.953  ************************************
00:10:29.953  END TEST bdev_hello_world
00:10:29.953  ************************************
00:10:29.953   05:00:43 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:10:29.953   05:00:43 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:10:29.953   05:00:43 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:29.953   05:00:43 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:29.953   05:00:43 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:29.953  ************************************
00:10:29.953  START TEST bdev_bounds
00:10:29.953  ************************************
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=126811
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 126811'
00:10:29.953  Process bdevio pid: 126811
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 126811
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 126811 ']'
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:29.953  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:29.953   05:00:43 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:10:29.953  [2024-11-20 05:00:43.764603] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:29.954  [2024-11-20 05:00:43.764897] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid126811 ]
00:10:30.213  [2024-11-20 05:00:43.931153] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:30.213  [2024-11-20 05:00:43.950113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:30.213  [2024-11-20 05:00:43.983287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:30.213  [2024-11-20 05:00:43.983421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:30.213  [2024-11-20 05:00:43.983658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:30.213  [2024-11-20 05:00:44.126894] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:30.213  [2024-11-20 05:00:44.127054] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:30.213  [2024-11-20 05:00:44.134790] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:30.213  [2024-11-20 05:00:44.134844] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:30.213  [2024-11-20 05:00:44.142889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:30.213  [2024-11-20 05:00:44.142965] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:30.213  [2024-11-20 05:00:44.143016] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:30.472  [2024-11-20 05:00:44.237905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:30.472  [2024-11-20 05:00:44.238004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:30.472  [2024-11-20 05:00:44.238063] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:10:30.472  [2024-11-20 05:00:44.238101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:30.472  [2024-11-20 05:00:44.240866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:30.472  [2024-11-20 05:00:44.240928] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:31.039   05:00:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:31.039   05:00:44 blockdev_general.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:10:31.039   05:00:44 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:10:31.039  I/O targets:
00:10:31.039    Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:10:31.039    Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:10:31.039    Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:10:31.039    Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:10:31.039    TestPT: 65536 blocks of 512 bytes (32 MiB)
00:10:31.039    raid0: 131072 blocks of 512 bytes (64 MiB)
00:10:31.039    concat0: 131072 blocks of 512 bytes (64 MiB)
00:10:31.039    raid1: 65536 blocks of 512 bytes (32 MiB)
00:10:31.039    AIO0: 5000 blocks of 2048 bytes (10 MiB)
00:10:31.039  
00:10:31.039  
00:10:31.039       CUnit - A unit testing framework for C - Version 2.1-3
00:10:31.039       http://cunit.sourceforge.net/
00:10:31.039  
00:10:31.039  
00:10:31.039  Suite: bdevio tests on: AIO0
00:10:31.039    Test: blockdev write read block ...passed
00:10:31.039    Test: blockdev write zeroes read block ...passed
00:10:31.039    Test: blockdev write zeroes read no split ...passed
00:10:31.040    Test: blockdev write zeroes read split ...passed
00:10:31.040    Test: blockdev write zeroes read split partial ...passed
00:10:31.040    Test: blockdev reset ...passed
00:10:31.040    Test: blockdev write read 8 blocks ...passed
00:10:31.040    Test: blockdev write read size > 128k ...passed
00:10:31.040    Test: blockdev write read invalid size ...passed
00:10:31.040    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.040    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.040    Test: blockdev write read max offset ...passed
00:10:31.040    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.040    Test: blockdev writev readv 8 blocks ...passed
00:10:31.040    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.040    Test: blockdev writev readv block ...passed
00:10:31.040    Test: blockdev writev readv size > 128k ...passed
00:10:31.040    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.040    Test: blockdev comparev and writev ...passed
00:10:31.040    Test: blockdev nvme passthru rw ...passed
00:10:31.040    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.040    Test: blockdev nvme admin passthru ...passed
00:10:31.040    Test: blockdev copy ...passed
00:10:31.040  Suite: bdevio tests on: raid1
00:10:31.040    Test: blockdev write read block ...passed
00:10:31.040    Test: blockdev write zeroes read block ...passed
00:10:31.040    Test: blockdev write zeroes read no split ...passed
00:10:31.040    Test: blockdev write zeroes read split ...passed
00:10:31.040    Test: blockdev write zeroes read split partial ...passed
00:10:31.040    Test: blockdev reset ...passed
00:10:31.040    Test: blockdev write read 8 blocks ...passed
00:10:31.040    Test: blockdev write read size > 128k ...passed
00:10:31.040    Test: blockdev write read invalid size ...passed
00:10:31.040    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.040    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.040    Test: blockdev write read max offset ...passed
00:10:31.040    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.040    Test: blockdev writev readv 8 blocks ...passed
00:10:31.040    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.040    Test: blockdev writev readv block ...passed
00:10:31.040    Test: blockdev writev readv size > 128k ...passed
00:10:31.040    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.040    Test: blockdev comparev and writev ...passed
00:10:31.040    Test: blockdev nvme passthru rw ...passed
00:10:31.040    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.040    Test: blockdev nvme admin passthru ...passed
00:10:31.040    Test: blockdev copy ...passed
00:10:31.040  Suite: bdevio tests on: concat0
00:10:31.040    Test: blockdev write read block ...passed
00:10:31.040    Test: blockdev write zeroes read block ...passed
00:10:31.040    Test: blockdev write zeroes read no split ...passed
00:10:31.040    Test: blockdev write zeroes read split ...passed
00:10:31.040    Test: blockdev write zeroes read split partial ...passed
00:10:31.040    Test: blockdev reset ...passed
00:10:31.040    Test: blockdev write read 8 blocks ...passed
00:10:31.040    Test: blockdev write read size > 128k ...passed
00:10:31.040    Test: blockdev write read invalid size ...passed
00:10:31.040    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.040    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.040    Test: blockdev write read max offset ...passed
00:10:31.040    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.040    Test: blockdev writev readv 8 blocks ...passed
00:10:31.040    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.040    Test: blockdev writev readv block ...passed
00:10:31.040    Test: blockdev writev readv size > 128k ...passed
00:10:31.040    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.040    Test: blockdev comparev and writev ...passed
00:10:31.040    Test: blockdev nvme passthru rw ...passed
00:10:31.040    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.040    Test: blockdev nvme admin passthru ...passed
00:10:31.040    Test: blockdev copy ...passed
00:10:31.040  Suite: bdevio tests on: raid0
00:10:31.040    Test: blockdev write read block ...passed
00:10:31.040    Test: blockdev write zeroes read block ...passed
00:10:31.040    Test: blockdev write zeroes read no split ...passed
00:10:31.040    Test: blockdev write zeroes read split ...passed
00:10:31.040    Test: blockdev write zeroes read split partial ...passed
00:10:31.040    Test: blockdev reset ...passed
00:10:31.040    Test: blockdev write read 8 blocks ...passed
00:10:31.040    Test: blockdev write read size > 128k ...passed
00:10:31.040    Test: blockdev write read invalid size ...passed
00:10:31.040    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.040    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.040    Test: blockdev write read max offset ...passed
00:10:31.040    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.040    Test: blockdev writev readv 8 blocks ...passed
00:10:31.040    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.040    Test: blockdev writev readv block ...passed
00:10:31.040    Test: blockdev writev readv size > 128k ...passed
00:10:31.040    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.040    Test: blockdev comparev and writev ...passed
00:10:31.040    Test: blockdev nvme passthru rw ...passed
00:10:31.040    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.040    Test: blockdev nvme admin passthru ...passed
00:10:31.040    Test: blockdev copy ...passed
00:10:31.040  Suite: bdevio tests on: TestPT
00:10:31.040    Test: blockdev write read block ...passed
00:10:31.040    Test: blockdev write zeroes read block ...passed
00:10:31.040    Test: blockdev write zeroes read no split ...passed
00:10:31.040    Test: blockdev write zeroes read split ...passed
00:10:31.040    Test: blockdev write zeroes read split partial ...passed
00:10:31.040    Test: blockdev reset ...passed
00:10:31.040    Test: blockdev write read 8 blocks ...passed
00:10:31.040    Test: blockdev write read size > 128k ...passed
00:10:31.040    Test: blockdev write read invalid size ...passed
00:10:31.040    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.040    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.040    Test: blockdev write read max offset ...passed
00:10:31.040    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.040    Test: blockdev writev readv 8 blocks ...passed
00:10:31.040    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.040    Test: blockdev writev readv block ...passed
00:10:31.040    Test: blockdev writev readv size > 128k ...passed
00:10:31.040    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.040    Test: blockdev comparev and writev ...passed
00:10:31.040    Test: blockdev nvme passthru rw ...passed
00:10:31.040    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.040    Test: blockdev nvme admin passthru ...passed
00:10:31.040    Test: blockdev copy ...passed
00:10:31.040  Suite: bdevio tests on: Malloc2p7
00:10:31.040    Test: blockdev write read block ...passed
00:10:31.040    Test: blockdev write zeroes read block ...passed
00:10:31.040    Test: blockdev write zeroes read no split ...passed
00:10:31.040    Test: blockdev write zeroes read split ...passed
00:10:31.040    Test: blockdev write zeroes read split partial ...passed
00:10:31.040    Test: blockdev reset ...passed
00:10:31.040    Test: blockdev write read 8 blocks ...passed
00:10:31.040    Test: blockdev write read size > 128k ...passed
00:10:31.040    Test: blockdev write read invalid size ...passed
00:10:31.040    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.040    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.040    Test: blockdev write read max offset ...passed
00:10:31.040    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.040    Test: blockdev writev readv 8 blocks ...passed
00:10:31.040    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.040    Test: blockdev writev readv block ...passed
00:10:31.040    Test: blockdev writev readv size > 128k ...passed
00:10:31.040    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.040    Test: blockdev comparev and writev ...passed
00:10:31.040    Test: blockdev nvme passthru rw ...passed
00:10:31.040    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.040    Test: blockdev nvme admin passthru ...passed
00:10:31.040    Test: blockdev copy ...passed
00:10:31.040  Suite: bdevio tests on: Malloc2p6
00:10:31.040    Test: blockdev write read block ...passed
00:10:31.040    Test: blockdev write zeroes read block ...passed
00:10:31.040    Test: blockdev write zeroes read no split ...passed
00:10:31.040    Test: blockdev write zeroes read split ...passed
00:10:31.040    Test: blockdev write zeroes read split partial ...passed
00:10:31.040    Test: blockdev reset ...passed
00:10:31.040    Test: blockdev write read 8 blocks ...passed
00:10:31.040    Test: blockdev write read size > 128k ...passed
00:10:31.040    Test: blockdev write read invalid size ...passed
00:10:31.040    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.040    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.040    Test: blockdev write read max offset ...passed
00:10:31.040    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.040    Test: blockdev writev readv 8 blocks ...passed
00:10:31.040    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.040    Test: blockdev writev readv block ...passed
00:10:31.040    Test: blockdev writev readv size > 128k ...passed
00:10:31.040    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.040    Test: blockdev comparev and writev ...passed
00:10:31.040    Test: blockdev nvme passthru rw ...passed
00:10:31.040    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.040    Test: blockdev nvme admin passthru ...passed
00:10:31.040    Test: blockdev copy ...passed
00:10:31.040  Suite: bdevio tests on: Malloc2p5
00:10:31.040    Test: blockdev write read block ...passed
00:10:31.040    Test: blockdev write zeroes read block ...passed
00:10:31.300    Test: blockdev write zeroes read no split ...passed
00:10:31.300    Test: blockdev write zeroes read split ...passed
00:10:31.300    Test: blockdev write zeroes read split partial ...passed
00:10:31.300    Test: blockdev reset ...passed
00:10:31.300    Test: blockdev write read 8 blocks ...passed
00:10:31.300    Test: blockdev write read size > 128k ...passed
00:10:31.300    Test: blockdev write read invalid size ...passed
00:10:31.300    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.300    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.300    Test: blockdev write read max offset ...passed
00:10:31.300    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.300    Test: blockdev writev readv 8 blocks ...passed
00:10:31.300    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.300    Test: blockdev writev readv block ...passed
00:10:31.300    Test: blockdev writev readv size > 128k ...passed
00:10:31.300    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.300    Test: blockdev comparev and writev ...passed
00:10:31.300    Test: blockdev nvme passthru rw ...passed
00:10:31.300    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.300    Test: blockdev nvme admin passthru ...passed
00:10:31.300    Test: blockdev copy ...passed
00:10:31.300  Suite: bdevio tests on: Malloc2p4
00:10:31.300    Test: blockdev write read block ...passed
00:10:31.300    Test: blockdev write zeroes read block ...passed
00:10:31.300    Test: blockdev write zeroes read no split ...passed
00:10:31.300    Test: blockdev write zeroes read split ...passed
00:10:31.300    Test: blockdev write zeroes read split partial ...passed
00:10:31.300    Test: blockdev reset ...passed
00:10:31.300    Test: blockdev write read 8 blocks ...passed
00:10:31.300    Test: blockdev write read size > 128k ...passed
00:10:31.300    Test: blockdev write read invalid size ...passed
00:10:31.300    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.300    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.300    Test: blockdev write read max offset ...passed
00:10:31.300    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.300    Test: blockdev writev readv 8 blocks ...passed
00:10:31.300    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.300    Test: blockdev writev readv block ...passed
00:10:31.300    Test: blockdev writev readv size > 128k ...passed
00:10:31.300    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.300    Test: blockdev comparev and writev ...passed
00:10:31.300    Test: blockdev nvme passthru rw ...passed
00:10:31.301    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.301    Test: blockdev nvme admin passthru ...passed
00:10:31.301    Test: blockdev copy ...passed
00:10:31.301  Suite: bdevio tests on: Malloc2p3
00:10:31.301    Test: blockdev write read block ...passed
00:10:31.301    Test: blockdev write zeroes read block ...passed
00:10:31.301    Test: blockdev write zeroes read no split ...passed
00:10:31.301    Test: blockdev write zeroes read split ...passed
00:10:31.301    Test: blockdev write zeroes read split partial ...passed
00:10:31.301    Test: blockdev reset ...passed
00:10:31.301    Test: blockdev write read 8 blocks ...passed
00:10:31.301    Test: blockdev write read size > 128k ...passed
00:10:31.301    Test: blockdev write read invalid size ...passed
00:10:31.301    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.301    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.301    Test: blockdev write read max offset ...passed
00:10:31.301    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.301    Test: blockdev writev readv 8 blocks ...passed
00:10:31.301    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.301    Test: blockdev writev readv block ...passed
00:10:31.301    Test: blockdev writev readv size > 128k ...passed
00:10:31.301    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.301    Test: blockdev comparev and writev ...passed
00:10:31.301    Test: blockdev nvme passthru rw ...passed
00:10:31.301    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.301    Test: blockdev nvme admin passthru ...passed
00:10:31.301    Test: blockdev copy ...passed
00:10:31.301  Suite: bdevio tests on: Malloc2p2
00:10:31.301    Test: blockdev write read block ...passed
00:10:31.301    Test: blockdev write zeroes read block ...passed
00:10:31.301    Test: blockdev write zeroes read no split ...passed
00:10:31.301    Test: blockdev write zeroes read split ...passed
00:10:31.301    Test: blockdev write zeroes read split partial ...passed
00:10:31.301    Test: blockdev reset ...passed
00:10:31.301    Test: blockdev write read 8 blocks ...passed
00:10:31.301    Test: blockdev write read size > 128k ...passed
00:10:31.301    Test: blockdev write read invalid size ...passed
00:10:31.301    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.301    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.301    Test: blockdev write read max offset ...passed
00:10:31.301    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.301    Test: blockdev writev readv 8 blocks ...passed
00:10:31.301    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.301    Test: blockdev writev readv block ...passed
00:10:31.301    Test: blockdev writev readv size > 128k ...passed
00:10:31.301    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.301    Test: blockdev comparev and writev ...passed
00:10:31.301    Test: blockdev nvme passthru rw ...passed
00:10:31.301    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.301    Test: blockdev nvme admin passthru ...passed
00:10:31.301    Test: blockdev copy ...passed
00:10:31.301  Suite: bdevio tests on: Malloc2p1
00:10:31.301    Test: blockdev write read block ...passed
00:10:31.301    Test: blockdev write zeroes read block ...passed
00:10:31.301    Test: blockdev write zeroes read no split ...passed
00:10:31.301    Test: blockdev write zeroes read split ...passed
00:10:31.301    Test: blockdev write zeroes read split partial ...passed
00:10:31.301    Test: blockdev reset ...passed
00:10:31.301    Test: blockdev write read 8 blocks ...passed
00:10:31.301    Test: blockdev write read size > 128k ...passed
00:10:31.301    Test: blockdev write read invalid size ...passed
00:10:31.301    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.301    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.301    Test: blockdev write read max offset ...passed
00:10:31.301    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.301    Test: blockdev writev readv 8 blocks ...passed
00:10:31.301    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.301    Test: blockdev writev readv block ...passed
00:10:31.301    Test: blockdev writev readv size > 128k ...passed
00:10:31.301    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.301    Test: blockdev comparev and writev ...passed
00:10:31.301    Test: blockdev nvme passthru rw ...passed
00:10:31.301    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.301    Test: blockdev nvme admin passthru ...passed
00:10:31.301    Test: blockdev copy ...passed
00:10:31.301  Suite: bdevio tests on: Malloc2p0
00:10:31.301    Test: blockdev write read block ...passed
00:10:31.301    Test: blockdev write zeroes read block ...passed
00:10:31.301    Test: blockdev write zeroes read no split ...passed
00:10:31.301    Test: blockdev write zeroes read split ...passed
00:10:31.301    Test: blockdev write zeroes read split partial ...passed
00:10:31.301    Test: blockdev reset ...passed
00:10:31.301    Test: blockdev write read 8 blocks ...passed
00:10:31.301    Test: blockdev write read size > 128k ...passed
00:10:31.301    Test: blockdev write read invalid size ...passed
00:10:31.301    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.301    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.301    Test: blockdev write read max offset ...passed
00:10:31.301    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.301    Test: blockdev writev readv 8 blocks ...passed
00:10:31.301    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.301    Test: blockdev writev readv block ...passed
00:10:31.301    Test: blockdev writev readv size > 128k ...passed
00:10:31.301    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.301    Test: blockdev comparev and writev ...passed
00:10:31.301    Test: blockdev nvme passthru rw ...passed
00:10:31.301    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.301    Test: blockdev nvme admin passthru ...passed
00:10:31.301    Test: blockdev copy ...passed
00:10:31.301  Suite: bdevio tests on: Malloc1p1
00:10:31.301    Test: blockdev write read block ...passed
00:10:31.301    Test: blockdev write zeroes read block ...passed
00:10:31.301    Test: blockdev write zeroes read no split ...passed
00:10:31.301    Test: blockdev write zeroes read split ...passed
00:10:31.301    Test: blockdev write zeroes read split partial ...passed
00:10:31.301    Test: blockdev reset ...passed
00:10:31.301    Test: blockdev write read 8 blocks ...passed
00:10:31.301    Test: blockdev write read size > 128k ...passed
00:10:31.301    Test: blockdev write read invalid size ...passed
00:10:31.301    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.301    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.301    Test: blockdev write read max offset ...passed
00:10:31.301    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.301    Test: blockdev writev readv 8 blocks ...passed
00:10:31.301    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.301    Test: blockdev writev readv block ...passed
00:10:31.301    Test: blockdev writev readv size > 128k ...passed
00:10:31.301    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.301    Test: blockdev comparev and writev ...passed
00:10:31.301    Test: blockdev nvme passthru rw ...passed
00:10:31.301    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.301    Test: blockdev nvme admin passthru ...passed
00:10:31.301    Test: blockdev copy ...passed
00:10:31.301  Suite: bdevio tests on: Malloc1p0
00:10:31.301    Test: blockdev write read block ...passed
00:10:31.301    Test: blockdev write zeroes read block ...passed
00:10:31.301    Test: blockdev write zeroes read no split ...passed
00:10:31.301    Test: blockdev write zeroes read split ...passed
00:10:31.301    Test: blockdev write zeroes read split partial ...passed
00:10:31.301    Test: blockdev reset ...passed
00:10:31.301    Test: blockdev write read 8 blocks ...passed
00:10:31.301    Test: blockdev write read size > 128k ...passed
00:10:31.301    Test: blockdev write read invalid size ...passed
00:10:31.301    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.301    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.301    Test: blockdev write read max offset ...passed
00:10:31.301    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.301    Test: blockdev writev readv 8 blocks ...passed
00:10:31.301    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.301    Test: blockdev writev readv block ...passed
00:10:31.302    Test: blockdev writev readv size > 128k ...passed
00:10:31.302    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.302    Test: blockdev comparev and writev ...passed
00:10:31.302    Test: blockdev nvme passthru rw ...passed
00:10:31.302    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.302    Test: blockdev nvme admin passthru ...passed
00:10:31.302    Test: blockdev copy ...passed
00:10:31.302  Suite: bdevio tests on: Malloc0
00:10:31.302    Test: blockdev write read block ...passed
00:10:31.302    Test: blockdev write zeroes read block ...passed
00:10:31.302    Test: blockdev write zeroes read no split ...passed
00:10:31.302    Test: blockdev write zeroes read split ...passed
00:10:31.302    Test: blockdev write zeroes read split partial ...passed
00:10:31.302    Test: blockdev reset ...passed
00:10:31.302    Test: blockdev write read 8 blocks ...passed
00:10:31.302    Test: blockdev write read size > 128k ...passed
00:10:31.302    Test: blockdev write read invalid size ...passed
00:10:31.302    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:10:31.302    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:10:31.302    Test: blockdev write read max offset ...passed
00:10:31.302    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:10:31.302    Test: blockdev writev readv 8 blocks ...passed
00:10:31.302    Test: blockdev writev readv 30 x 1block ...passed
00:10:31.302    Test: blockdev writev readv block ...passed
00:10:31.302    Test: blockdev writev readv size > 128k ...passed
00:10:31.302    Test: blockdev writev readv size > 128k in two iovs ...passed
00:10:31.302    Test: blockdev comparev and writev ...passed
00:10:31.302    Test: blockdev nvme passthru rw ...passed
00:10:31.302    Test: blockdev nvme passthru vendor specific ...passed
00:10:31.302    Test: blockdev nvme admin passthru ...passed
00:10:31.302    Test: blockdev copy ...passed
00:10:31.302  
00:10:31.302  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:10:31.302                suites     16     16    n/a      0        0
00:10:31.302                 tests    368    368    368      0        0
00:10:31.302               asserts   2224   2224   2224      0      n/a
00:10:31.302  
00:10:31.302  Elapsed time =    0.733 seconds
00:10:31.302  0
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 126811
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 126811 ']'
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 126811
00:10:31.302    05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:31.302    05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126811
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126811'
00:10:31.302  killing process with pid 126811
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # kill 126811
00:10:31.302   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@978 -- # wait 126811
00:10:31.561   05:00:45 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:10:31.561  
00:10:31.561  real	0m1.785s
00:10:31.561  user	0m4.173s
00:10:31.561  sys	0m0.493s
00:10:31.561   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:31.561   05:00:45 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:10:31.561  ************************************
00:10:31.561  END TEST bdev_bounds
00:10:31.561  ************************************
00:10:31.561   05:00:45 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:10:31.561   05:00:45 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:10:31.561   05:00:45 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:31.561   05:00:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:31.819  ************************************
00:10:31.819  START TEST bdev_nbd
00:10:31.819  ************************************
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:10:31.819    05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=16
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=16
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=126876
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 126876 /var/tmp/spdk-nbd.sock
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 126876 ']'
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:10:31.819  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:31.819   05:00:45 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:10:31.819  [2024-11-20 05:00:45.616967] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:10:31.819  [2024-11-20 05:00:45.617254] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:31.819  [2024-11-20 05:00:45.769382] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:10:32.078  [2024-11-20 05:00:45.795927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:32.078  [2024-11-20 05:00:45.832700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:32.078  [2024-11-20 05:00:45.971258] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:32.078  [2024-11-20 05:00:45.971362] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:10:32.078  [2024-11-20 05:00:45.979192] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:32.078  [2024-11-20 05:00:45.979240] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:10:32.078  [2024-11-20 05:00:45.987218] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:32.078  [2024-11-20 05:00:45.987276] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:10:32.078  [2024-11-20 05:00:45.987309] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:10:32.337  [2024-11-20 05:00:46.078850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:10:32.337  [2024-11-20 05:00:46.078938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:32.337  [2024-11-20 05:00:46.078989] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:10:32.337  [2024-11-20 05:00:46.079019] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:32.337  [2024-11-20 05:00:46.081383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:32.337  [2024-11-20 05:00:46.081434] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0'
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:10:32.596   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:32.596    05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:10:32.855    05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:32.855  1+0 records in
00:10:32.855  1+0 records out
00:10:32.855  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326913 s, 12.5 MB/s
00:10:32.855    05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:32.855   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.113   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:33.113   05:00:46 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:33.113   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:33.113   05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:33.113    05:00:46 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:10:33.372    05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:33.372  1+0 records in
00:10:33.372  1+0 records out
00:10:33.372  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00076018 s, 5.4 MB/s
00:10:33.372    05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:33.372   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:33.372    05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2
00:10:33.630    05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:33.630  1+0 records in
00:10:33.630  1+0 records out
00:10:33.630  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671688 s, 6.1 MB/s
00:10:33.630    05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:33.630   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:33.630    05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0
00:10:33.888   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3
00:10:33.888    05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3
00:10:33.888   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3
00:10:33.888   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:10:33.888   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:33.888   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:33.888   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:33.889  1+0 records in
00:10:33.889  1+0 records out
00:10:33.889  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405998 s, 10.1 MB/s
00:10:33.889    05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:33.889   05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:33.889    05:00:47 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4
00:10:34.147    05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:34.147  1+0 records in
00:10:34.147  1+0 records out
00:10:34.147  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445969 s, 9.2 MB/s
00:10:34.147    05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:34.147   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:34.147    05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2
00:10:34.405   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5
00:10:34.405    05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5
00:10:34.405   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5
00:10:34.405   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:10:34.405   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:34.406   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:34.406   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:34.406   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:10:34.406   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:34.406   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:34.406   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:34.406   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:34.406  1+0 records in
00:10:34.406  1+0 records out
00:10:34.406  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401299 s, 10.2 MB/s
00:10:34.664    05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:34.664   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:34.664   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:34.664   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:34.664   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:34.664   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:34.664   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:34.664    05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3
00:10:34.922   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6
00:10:34.922    05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6
00:10:34.922   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:34.923  1+0 records in
00:10:34.923  1+0 records out
00:10:34.923  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814549 s, 5.0 MB/s
00:10:34.923    05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:34.923   05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:34.923    05:00:48 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7
00:10:35.181    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:35.181  1+0 records in
00:10:35.181  1+0 records out
00:10:35.181  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679922 s, 6.0 MB/s
00:10:35.181    05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:35.181   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:35.181    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8
00:10:35.439    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:35.439  1+0 records in
00:10:35.439  1+0 records out
00:10:35.439  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000690268 s, 5.9 MB/s
00:10:35.439    05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:35.439   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:35.439    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6
00:10:35.697   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9
00:10:35.698    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:35.698  1+0 records in
00:10:35.698  1+0 records out
00:10:35.698  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340801 s, 12.0 MB/s
00:10:35.698    05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:35.698   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:35.698    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10
00:10:35.956    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:35.956  1+0 records in
00:10:35.956  1+0 records out
00:10:35.956  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000682611 s, 6.0 MB/s
00:10:35.956    05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:35.956   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:35.957   05:00:49 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:35.957   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:35.957   05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:35.957    05:00:49 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT
00:10:36.523   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11
00:10:36.523    05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11
00:10:36.523   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11
00:10:36.523   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:36.524  1+0 records in
00:10:36.524  1+0 records out
00:10:36.524  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778526 s, 5.3 MB/s
00:10:36.524    05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:36.524    05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12
00:10:36.524    05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:36.524  1+0 records in
00:10:36.524  1+0 records out
00:10:36.524  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105313 s, 3.9 MB/s
00:10:36.524    05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:36.524   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:36.524    05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13
00:10:37.091    05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:37.091  1+0 records in
00:10:37.091  1+0 records out
00:10:37.091  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000950916 s, 4.3 MB/s
00:10:37.091    05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:37.091   05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:37.091    05:00:50 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14
00:10:37.091    05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:37.091   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:37.091  1+0 records in
00:10:37.091  1+0 records out
00:10:37.091  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894467 s, 4.6 MB/s
00:10:37.091    05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:37.349   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:37.349   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:37.349   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:37.349   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:37.349   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:37.349   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:37.349    05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15
00:10:37.608    05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:37.608  1+0 records in
00:10:37.608  1+0 records out
00:10:37.608  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120583 s, 3.4 MB/s
00:10:37.608    05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:10:37.608   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 16 ))
00:10:37.608    05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:37.866   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd0",
00:10:37.866      "bdev_name": "Malloc0"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd1",
00:10:37.866      "bdev_name": "Malloc1p0"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd2",
00:10:37.866      "bdev_name": "Malloc1p1"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd3",
00:10:37.866      "bdev_name": "Malloc2p0"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd4",
00:10:37.866      "bdev_name": "Malloc2p1"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd5",
00:10:37.866      "bdev_name": "Malloc2p2"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd6",
00:10:37.866      "bdev_name": "Malloc2p3"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd7",
00:10:37.866      "bdev_name": "Malloc2p4"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd8",
00:10:37.866      "bdev_name": "Malloc2p5"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd9",
00:10:37.866      "bdev_name": "Malloc2p6"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd10",
00:10:37.866      "bdev_name": "Malloc2p7"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd11",
00:10:37.866      "bdev_name": "TestPT"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd12",
00:10:37.866      "bdev_name": "raid0"
00:10:37.866    },
00:10:37.866    {
00:10:37.866      "nbd_device": "/dev/nbd13",
00:10:37.867      "bdev_name": "concat0"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd14",
00:10:37.867      "bdev_name": "raid1"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd15",
00:10:37.867      "bdev_name": "AIO0"
00:10:37.867    }
00:10:37.867  ]'
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:10:37.867    05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd0",
00:10:37.867      "bdev_name": "Malloc0"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd1",
00:10:37.867      "bdev_name": "Malloc1p0"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd2",
00:10:37.867      "bdev_name": "Malloc1p1"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd3",
00:10:37.867      "bdev_name": "Malloc2p0"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd4",
00:10:37.867      "bdev_name": "Malloc2p1"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd5",
00:10:37.867      "bdev_name": "Malloc2p2"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd6",
00:10:37.867      "bdev_name": "Malloc2p3"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd7",
00:10:37.867      "bdev_name": "Malloc2p4"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd8",
00:10:37.867      "bdev_name": "Malloc2p5"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd9",
00:10:37.867      "bdev_name": "Malloc2p6"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd10",
00:10:37.867      "bdev_name": "Malloc2p7"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd11",
00:10:37.867      "bdev_name": "TestPT"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd12",
00:10:37.867      "bdev_name": "raid0"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd13",
00:10:37.867      "bdev_name": "concat0"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd14",
00:10:37.867      "bdev_name": "raid1"
00:10:37.867    },
00:10:37.867    {
00:10:37.867      "nbd_device": "/dev/nbd15",
00:10:37.867      "bdev_name": "AIO0"
00:10:37.867    }
00:10:37.867  ]'
00:10:37.867    05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15'
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15')
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:37.867   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:38.125    05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:38.125   05:00:51 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:38.384    05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:38.384   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:10:38.644    05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:10:38.644    05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:38.644   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:10:38.905   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:38.905   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:38.905   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:38.905   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:10:38.905    05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:39.163   05:00:52 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:10:39.163    05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:39.163   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:10:39.422    05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:39.422   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:10:39.680    05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:39.680   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:10:39.939    05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:39.939   05:00:53 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:10:40.197    05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:40.197   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:10:40.455    05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:40.455   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:10:40.714    05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:40.714   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:10:40.972    05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:40.972   05:00:54 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:10:41.230    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:41.230   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:10:41.489    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:41.489   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:10:41.747    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:10:41.747   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:10:41.747   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:10:41.747   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:41.747   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:41.747   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:10:41.747   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:41.747   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:41.747    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:41.747    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:41.747     05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:42.006    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:42.006     05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:42.006     05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:42.006    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:42.006     05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:10:42.006     05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:42.006     05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:10:42.006    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:10:42.006    05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0')
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:42.006   05:00:55 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:10:42.573  /dev/nbd0
00:10:42.573    05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:42.573  1+0 records in
00:10:42.573  1+0 records out
00:10:42.573  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570665 s, 7.2 MB/s
00:10:42.573    05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:42.573   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1
00:10:42.832  /dev/nbd1
00:10:42.832    05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:42.832  1+0 records in
00:10:42.832  1+0 records out
00:10:42.832  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249981 s, 16.4 MB/s
00:10:42.832    05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:42.832   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10
00:10:43.090  /dev/nbd10
00:10:43.090    05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:43.090  1+0 records in
00:10:43.090  1+0 records out
00:10:43.090  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000644618 s, 6.4 MB/s
00:10:43.090    05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:43.090   05:00:56 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11
00:10:43.349  /dev/nbd11
00:10:43.349    05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:43.349  1+0 records in
00:10:43.349  1+0 records out
00:10:43.349  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641905 s, 6.4 MB/s
00:10:43.349    05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:43.349   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12
00:10:43.608  /dev/nbd12
00:10:43.608    05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:43.608  1+0 records in
00:10:43.608  1+0 records out
00:10:43.608  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000587658 s, 7.0 MB/s
00:10:43.608    05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:43.608   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13
00:10:43.866  /dev/nbd13
00:10:43.867    05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:43.867  1+0 records in
00:10:43.867  1+0 records out
00:10:43.867  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462352 s, 8.9 MB/s
00:10:43.867    05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:43.867   05:00:57 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14
00:10:44.125  /dev/nbd14
00:10:44.125    05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:44.125  1+0 records in
00:10:44.125  1+0 records out
00:10:44.125  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000604705 s, 6.8 MB/s
00:10:44.125    05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:44.125   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15
00:10:44.383  /dev/nbd15
00:10:44.383    05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd15
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd15 /proc/partitions
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:44.383   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd15 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:44.384  1+0 records in
00:10:44.384  1+0 records out
00:10:44.384  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616091 s, 6.6 MB/s
00:10:44.384    05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:44.384   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2
00:10:44.642  /dev/nbd2
00:10:44.642    05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:44.642  1+0 records in
00:10:44.642  1+0 records out
00:10:44.642  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906869 s, 4.5 MB/s
00:10:44.642    05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:44.642   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3
00:10:44.900  /dev/nbd3
00:10:44.900    05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:44.900  1+0 records in
00:10:44.900  1+0 records out
00:10:44.900  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000755159 s, 5.4 MB/s
00:10:44.900    05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:44.900   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:44.901   05:00:58 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:44.901   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:44.901   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:44.901   05:00:58 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4
00:10:45.159  /dev/nbd4
00:10:45.159    05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:45.160  1+0 records in
00:10:45.160  1+0 records out
00:10:45.160  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000663455 s, 6.2 MB/s
00:10:45.160    05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:45.160   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5
00:10:45.417  /dev/nbd5
00:10:45.417    05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:45.417  1+0 records in
00:10:45.417  1+0 records out
00:10:45.417  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000565421 s, 7.2 MB/s
00:10:45.417    05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:45.417   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 /dev/nbd6
00:10:45.674  /dev/nbd6
00:10:45.674    05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:45.674  1+0 records in
00:10:45.674  1+0 records out
00:10:45.674  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000585918 s, 7.0 MB/s
00:10:45.674    05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:45.674   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7
00:10:45.932  /dev/nbd7
00:10:45.932    05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd7
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd7 /proc/partitions
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd7 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:45.932  1+0 records in
00:10:45.932  1+0 records out
00:10:45.932  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944598 s, 4.3 MB/s
00:10:45.932    05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:45.932   05:00:59 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8
00:10:46.191  /dev/nbd8
00:10:46.191    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd8
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd8 /proc/partitions
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd8 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:46.191  1+0 records in
00:10:46.191  1+0 records out
00:10:46.191  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814351 s, 5.0 MB/s
00:10:46.191    05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:46.191   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9
00:10:46.449  /dev/nbd9
00:10:46.708    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd9
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd9 /proc/partitions
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd9 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:10:46.708  1+0 records in
00:10:46.708  1+0 records out
00:10:46.708  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105554 s, 3.9 MB/s
00:10:46.708    05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:10:46.708   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 16 ))
00:10:46.708    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:46.708    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:46.708     05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:46.708    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd0",
00:10:46.708      "bdev_name": "Malloc0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd1",
00:10:46.708      "bdev_name": "Malloc1p0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd10",
00:10:46.708      "bdev_name": "Malloc1p1"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd11",
00:10:46.708      "bdev_name": "Malloc2p0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd12",
00:10:46.708      "bdev_name": "Malloc2p1"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd13",
00:10:46.708      "bdev_name": "Malloc2p2"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd14",
00:10:46.708      "bdev_name": "Malloc2p3"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd15",
00:10:46.708      "bdev_name": "Malloc2p4"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd2",
00:10:46.708      "bdev_name": "Malloc2p5"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd3",
00:10:46.708      "bdev_name": "Malloc2p6"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd4",
00:10:46.708      "bdev_name": "Malloc2p7"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd5",
00:10:46.708      "bdev_name": "TestPT"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd6",
00:10:46.708      "bdev_name": "raid0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd7",
00:10:46.708      "bdev_name": "concat0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd8",
00:10:46.708      "bdev_name": "raid1"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd9",
00:10:46.708      "bdev_name": "AIO0"
00:10:46.708    }
00:10:46.708  ]'
00:10:46.708     05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd0",
00:10:46.708      "bdev_name": "Malloc0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd1",
00:10:46.708      "bdev_name": "Malloc1p0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd10",
00:10:46.708      "bdev_name": "Malloc1p1"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd11",
00:10:46.708      "bdev_name": "Malloc2p0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd12",
00:10:46.708      "bdev_name": "Malloc2p1"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd13",
00:10:46.708      "bdev_name": "Malloc2p2"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd14",
00:10:46.708      "bdev_name": "Malloc2p3"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd15",
00:10:46.708      "bdev_name": "Malloc2p4"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd2",
00:10:46.708      "bdev_name": "Malloc2p5"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd3",
00:10:46.708      "bdev_name": "Malloc2p6"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd4",
00:10:46.708      "bdev_name": "Malloc2p7"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd5",
00:10:46.708      "bdev_name": "TestPT"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd6",
00:10:46.708      "bdev_name": "raid0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd7",
00:10:46.708      "bdev_name": "concat0"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd8",
00:10:46.708      "bdev_name": "raid1"
00:10:46.708    },
00:10:46.708    {
00:10:46.708      "nbd_device": "/dev/nbd9",
00:10:46.708      "bdev_name": "AIO0"
00:10:46.708    }
00:10:46.708  ]'
00:10:46.708     05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:46.966    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:10:46.966  /dev/nbd1
00:10:46.966  /dev/nbd10
00:10:46.966  /dev/nbd11
00:10:46.966  /dev/nbd12
00:10:46.966  /dev/nbd13
00:10:46.966  /dev/nbd14
00:10:46.966  /dev/nbd15
00:10:46.966  /dev/nbd2
00:10:46.966  /dev/nbd3
00:10:46.966  /dev/nbd4
00:10:46.966  /dev/nbd5
00:10:46.966  /dev/nbd6
00:10:46.966  /dev/nbd7
00:10:46.966  /dev/nbd8
00:10:46.966  /dev/nbd9'
00:10:46.966     05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:10:46.966  /dev/nbd1
00:10:46.966  /dev/nbd10
00:10:46.966  /dev/nbd11
00:10:46.966  /dev/nbd12
00:10:46.966  /dev/nbd13
00:10:46.966  /dev/nbd14
00:10:46.966  /dev/nbd15
00:10:46.966  /dev/nbd2
00:10:46.966  /dev/nbd3
00:10:46.966  /dev/nbd4
00:10:46.966  /dev/nbd5
00:10:46.966  /dev/nbd6
00:10:46.966  /dev/nbd7
00:10:46.966  /dev/nbd8
00:10:46.966  /dev/nbd9'
00:10:46.966     05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:46.966    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=16
00:10:46.966    05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 16
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=16
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']'
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:10:46.966  256+0 records in
00:10:46.966  256+0 records out
00:10:46.966  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00994074 s, 105 MB/s
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:10:46.966  256+0 records in
00:10:46.966  256+0 records out
00:10:46.966  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.130617 s, 8.0 MB/s
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:46.966   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:10:47.225  256+0 records in
00:10:47.225  256+0 records out
00:10:47.225  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139075 s, 7.5 MB/s
00:10:47.225   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:47.225   05:01:00 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct
00:10:47.225  256+0 records in
00:10:47.225  256+0 records out
00:10:47.225  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.160099 s, 6.5 MB/s
00:10:47.225   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:47.225   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct
00:10:47.483  256+0 records in
00:10:47.483  256+0 records out
00:10:47.483  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126836 s, 8.3 MB/s
00:10:47.483   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:47.483   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct
00:10:47.483  256+0 records in
00:10:47.483  256+0 records out
00:10:47.483  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135045 s, 7.8 MB/s
00:10:47.483   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:47.483   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct
00:10:47.741  256+0 records in
00:10:47.741  256+0 records out
00:10:47.741  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135597 s, 7.7 MB/s
00:10:47.741   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:47.741   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct
00:10:47.999  256+0 records in
00:10:47.999  256+0 records out
00:10:47.999  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125184 s, 8.4 MB/s
00:10:47.999   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:47.999   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct
00:10:47.999  256+0 records in
00:10:47.999  256+0 records out
00:10:47.999  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124643 s, 8.4 MB/s
00:10:47.999   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:47.999   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct
00:10:48.257  256+0 records in
00:10:48.257  256+0 records out
00:10:48.257  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12447 s, 8.4 MB/s
00:10:48.257   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:48.257   05:01:01 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct
00:10:48.257  256+0 records in
00:10:48.257  256+0 records out
00:10:48.257  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137618 s, 7.6 MB/s
00:10:48.257   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:48.257   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct
00:10:48.515  256+0 records in
00:10:48.515  256+0 records out
00:10:48.515  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145762 s, 7.2 MB/s
00:10:48.515   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:48.515   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct
00:10:48.515  256+0 records in
00:10:48.515  256+0 records out
00:10:48.515  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128053 s, 8.2 MB/s
00:10:48.515   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:48.515   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct
00:10:48.773  256+0 records in
00:10:48.773  256+0 records out
00:10:48.773  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125204 s, 8.4 MB/s
00:10:48.773   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:48.773   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct
00:10:48.773  256+0 records in
00:10:48.773  256+0 records out
00:10:48.773  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.138923 s, 7.5 MB/s
00:10:48.773   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:48.773   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct
00:10:49.031  256+0 records in
00:10:49.031  256+0 records out
00:10:49.031  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.144484 s, 7.3 MB/s
00:10:49.031   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:10:49.031   05:01:02 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct
00:10:49.290  256+0 records in
00:10:49.290  256+0 records out
00:10:49.290  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.213754 s, 4.9 MB/s
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd15
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd2
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd3
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd4
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd5
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd6
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd7
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd8
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd9
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9'
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:49.290   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:49.549    05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:49.549   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:10:49.807    05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:50.066   05:01:03 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10
00:10:50.324    05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:50.325   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11
00:10:50.583    05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:50.583   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12
00:10:50.841    05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
00:10:50.841    05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13
00:10:50.841   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14
00:10:51.099    05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14
00:10:51.099   05:01:04 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:51.099   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15
00:10:51.357    05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:51.357   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2
00:10:51.615    05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:51.615   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3
00:10:51.873    05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:51.873   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4
00:10:52.132    05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:52.132   05:01:05 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5
00:10:52.391    05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:52.391   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6
00:10:52.650    05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:52.650   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7
00:10:52.909    05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8
00:10:52.909    05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:52.909   05:01:06 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9
00:10:53.168    05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9
00:10:53.168   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9
00:10:53.168   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9
00:10:53.168   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:53.168   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:53.168   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions
00:10:53.426   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:53.426   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:53.426    05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:10:53.426    05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:53.426     05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:10:53.426    05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:10:53.426     05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:10:53.426     05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:10:53.685    05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:10:53.685     05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:10:53.685     05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:10:53.685     05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:10:53.685    05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:10:53.685    05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:10:53.685  malloc_lvol_verify
00:10:53.685   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:10:53.943  5b27637f-8539-4db5-8b3c-8281c923947c
00:10:53.943   05:01:07 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:10:54.201  c2187a0a-8eb1-437f-b8f6-987e3ccfb366
00:10:54.201   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:10:54.459  /dev/nbd0
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:10:54.459  mke2fs 1.46.5 (30-Dec-2021)
00:10:54.459  
00:10:54.459  Filesystem too small for a journal
00:10:54.459  Discarding device blocks:    0/1024         done                            
00:10:54.459  Creating filesystem with 1024 4k blocks and 1024 inodes
00:10:54.459  
00:10:54.459  Allocating group tables: 0/1   done                            
00:10:54.459  Writing inode tables: 0/1   done                            
00:10:54.459  Writing superblocks and filesystem accounting information: 0/1   done
00:10:54.459  
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:10:54.459   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:10:54.717    05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 126876
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 126876 ']'
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 126876
00:10:54.717    05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:54.717    05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126876
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:54.717  killing process with pid 126876
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126876'
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@973 -- # kill 126876
00:10:54.717   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@978 -- # wait 126876
00:10:55.282   05:01:08 blockdev_general.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:10:55.282  
00:10:55.282  real	0m23.406s
00:10:55.282  user	0m32.944s
00:10:55.282  sys	0m8.901s
00:10:55.282   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:55.282   05:01:08 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:10:55.282  ************************************
00:10:55.282  END TEST bdev_nbd
00:10:55.282  ************************************
00:10:55.282   05:01:08 blockdev_general -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:10:55.282   05:01:08 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = nvme ']'
00:10:55.282   05:01:08 blockdev_general -- bdev/blockdev.sh@763 -- # '[' bdev = gpt ']'
00:10:55.282   05:01:08 blockdev_general -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite ''
00:10:55.282   05:01:08 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:55.282   05:01:08 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:55.282   05:01:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:10:55.282  ************************************
00:10:55.282  START TEST bdev_fio
00:10:55.282  ************************************
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:10:55.282  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:10:55.282    05:01:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:10:55.282    05:01:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:10:55.282   05:01:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:10:55.282    05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]'
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]'
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0
00:10:55.282   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:55.283   05:01:09 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:10:55.283  ************************************
00:10:55.283  START TEST bdev_fio_rw_verify
00:10:55.283  ************************************
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:10:55.283    05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:10:55.283    05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:10:55.283    05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:10:55.283   05:01:09 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:10:55.543  job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:10:55.543  fio-3.35
00:10:55.543  Starting 16 threads
00:11:07.751  
00:11:07.751  job_Malloc0: (groupid=0, jobs=16): err= 0: pid=128019: Wed Nov 20 05:01:20 2024
00:11:07.751    read: IOPS=79.7k, BW=311MiB/s (326MB/s)(3112MiB/10002msec)
00:11:07.751      slat (usec): min=2, max=44064, avg=33.95, stdev=445.25
00:11:07.751      clat (usec): min=9, max=44359, avg=283.08, stdev=1288.50
00:11:07.751       lat (usec): min=26, max=44390, avg=317.03, stdev=1362.61
00:11:07.751      clat percentiles (usec):
00:11:07.751       | 50.000th=[  165], 99.000th=[  619], 99.900th=[16319], 99.990th=[24511],
00:11:07.751       | 99.999th=[40109]
00:11:07.751    write: IOPS=127k, BW=495MiB/s (519MB/s)(4878MiB/9856msec); 0 zone resets
00:11:07.751      slat (usec): min=4, max=69292, avg=63.90, stdev=662.36
00:11:07.751      clat (usec): min=9, max=69650, avg=372.30, stdev=1528.80
00:11:07.751       lat (usec): min=38, max=69681, avg=436.20, stdev=1665.63
00:11:07.751      clat percentiles (usec):
00:11:07.751       | 50.000th=[  212], 99.000th=[ 5604], 99.900th=[17171], 99.990th=[32900],
00:11:07.751       | 99.999th=[61080]
00:11:07.751     bw (  KiB/s): min=319034, max=817304, per=99.54%, avg=504427.26, stdev=8337.47, samples=304
00:11:07.751     iops        : min=79758, max=204326, avg=126106.63, stdev=2084.37, samples=304
00:11:07.751    lat (usec)   : 10=0.01%, 20=0.01%, 50=0.44%, 100=12.56%, 250=59.33%
00:11:07.751    lat (usec)   : 500=24.91%, 750=1.56%, 1000=0.07%
00:11:07.751    lat (msec)   : 2=0.07%, 4=0.09%, 10=0.20%, 20=0.68%, 50=0.07%
00:11:07.751    lat (msec)   : 100=0.01%
00:11:07.751    cpu          : usr=55.98%, sys=2.18%, ctx=232326, majf=2, minf=96835
00:11:07.751    IO depths    : 1=11.2%, 2=23.3%, 4=52.1%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:07.751       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:07.751       complete  : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:07.751       issued rwts: total=796795,1248692,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:07.751       latency   : target=0, window=0, percentile=100.00%, depth=8
00:11:07.751  
00:11:07.751  Run status group 0 (all jobs):
00:11:07.751     READ: bw=311MiB/s (326MB/s), 311MiB/s-311MiB/s (326MB/s-326MB/s), io=3112MiB (3264MB), run=10002-10002msec
00:11:07.751    WRITE: bw=495MiB/s (519MB/s), 495MiB/s-495MiB/s (519MB/s-519MB/s), io=4878MiB (5115MB), run=9856-9856msec
00:11:07.751  -----------------------------------------------------
00:11:07.751  Suppressions used:
00:11:07.751    count      bytes template
00:11:07.751       16        140 /usr/src/fio/parse.c
00:11:07.751    11219    1077024 /usr/src/fio/iolog.c
00:11:07.751        1        904 libcrypto.so
00:11:07.751  -----------------------------------------------------
00:11:07.751  
00:11:07.751  
00:11:07.751  real	0m11.827s
00:11:07.751  user	1m32.417s
00:11:07.751  sys	0m4.253s
00:11:07.751   05:01:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:07.751   05:01:20 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:11:07.751  ************************************
00:11:07.751  END TEST bdev_fio_rw_verify
00:11:07.751  ************************************
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:11:07.751   05:01:20 blockdev_general.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:11:07.751    05:01:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:11:07.753    05:01:20 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "ee2a2068-f5b6-4218-9b38-43786a56a25b"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "ee2a2068-f5b6-4218-9b38-43786a56a25b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "4b613879-0cfd-5662-b06f-1f38ad217b15"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "4b613879-0cfd-5662-b06f-1f38ad217b15",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": 
false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "29ede4e2-5dfe-51a2-983d-8c3c11af1479"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "29ede4e2-5dfe-51a2-983d-8c3c11af1479",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "2f1081c4-6e06-56a0-a2e2-256cb6629107"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "2f1081c4-6e06-56a0-a2e2-256cb6629107",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    
"reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "3dfddc9f-602d-54df-834c-4119347c3a94"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "3dfddc9f-602d-54df-834c-4119347c3a94",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "a092d8c3-10fb-581b-ba31-aaa23b451d8c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "a092d8c3-10fb-581b-ba31-aaa23b451d8c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' 
'  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "baa4a0f0-c399-5f1c-a51d-5052a470b73a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "baa4a0f0-c399-5f1c-a51d-5052a470b73a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "02653c46-a8c1-51e8-964b-ffaf7adb76c3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": 
"02653c46-a8c1-51e8-964b-ffaf7adb76c3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "1b084445-4fc9-5c54-9257-0225e01bb346"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "1b084445-4fc9-5c54-9257-0225e01bb346",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": 
"Malloc2p6",' '  "aliases": [' '    "fcc03676-4e3b-519d-b924-fe1179ee7e2b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "fcc03676-4e3b-519d-b924-fe1179ee7e2b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "d8fd8204-299f-5ae4-9d34-0e8ed21c4ef3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "d8fd8204-299f-5ae4-9d34-0e8ed21c4ef3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    
"nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "50e3c69d-9ac5-56f2-8a16-948925865090"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "50e3c69d-9ac5-56f2-8a16-948925865090",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    
"nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "f5eca368-2d76-4e20-9ec5-8806ca8133a8",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "9447db89-8841-4b03-ad7e-7af0df92f215",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "461c4102-e18f-42ff-acff-1dd270b2e803"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "461c4102-e18f-42ff-acff-1dd270b2e803",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  
"zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "461c4102-e18f-42ff-acff-1dd270b2e803",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "5771bbcd-3b86-4266-83d7-6dbc37141fc9",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "548c016f-4204-4663-9be1-9ca15df866b6",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "906822be-d7b5-437a-8855-496f614673e5"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "906822be-d7b5-437a-8855-496f614673e5",' '  "assigned_rate_limits": 
{' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "906822be-d7b5-437a-8855-496f614673e5",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "7b15bcbd-ee24-4fd2-9a8b-054488f152e0",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "2cf4c52e-d1a5-42d4-b3fd-76f1c3020646",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "a6f177cf-8524-4367-bd94-de98d941b1b8"' '  ],' '  
"product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "a6f177cf-8524-4367-bd94-de98d941b1b8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:11:07.753   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0
00:11:07.753  Malloc1p0
00:11:07.753  Malloc1p1
00:11:07.753  Malloc2p0
00:11:07.753  Malloc2p1
00:11:07.753  Malloc2p2
00:11:07.753  Malloc2p3
00:11:07.753  Malloc2p4
00:11:07.753  Malloc2p5
00:11:07.753  Malloc2p6
00:11:07.753  Malloc2p7
00:11:07.753  TestPT
00:11:07.753  raid0
00:11:07.753  concat0 ]]
00:11:07.753    05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:11:07.754    05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "ee2a2068-f5b6-4218-9b38-43786a56a25b"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "ee2a2068-f5b6-4218-9b38-43786a56a25b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "4b613879-0cfd-5662-b06f-1f38ad217b15"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "4b613879-0cfd-5662-b06f-1f38ad217b15",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": 
false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "29ede4e2-5dfe-51a2-983d-8c3c11af1479"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "29ede4e2-5dfe-51a2-983d-8c3c11af1479",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "2f1081c4-6e06-56a0-a2e2-256cb6629107"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "2f1081c4-6e06-56a0-a2e2-256cb6629107",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    
"reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "3dfddc9f-602d-54df-834c-4119347c3a94"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "3dfddc9f-602d-54df-834c-4119347c3a94",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "a092d8c3-10fb-581b-ba31-aaa23b451d8c"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "a092d8c3-10fb-581b-ba31-aaa23b451d8c",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' 
'  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "baa4a0f0-c399-5f1c-a51d-5052a470b73a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "baa4a0f0-c399-5f1c-a51d-5052a470b73a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "02653c46-a8c1-51e8-964b-ffaf7adb76c3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": 
"02653c46-a8c1-51e8-964b-ffaf7adb76c3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "1b084445-4fc9-5c54-9257-0225e01bb346"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "1b084445-4fc9-5c54-9257-0225e01bb346",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": 
"Malloc2p6",' '  "aliases": [' '    "fcc03676-4e3b-519d-b924-fe1179ee7e2b"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "fcc03676-4e3b-519d-b924-fe1179ee7e2b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "d8fd8204-299f-5ae4-9d34-0e8ed21c4ef3"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "d8fd8204-299f-5ae4-9d34-0e8ed21c4ef3",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    
"nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "50e3c69d-9ac5-56f2-8a16-948925865090"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "50e3c69d-9ac5-56f2-8a16-948925865090",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    
"nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "cef2d010-c7fa-4a5b-8d7b-a506faf37e2e",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "f5eca368-2d76-4e20-9ec5-8806ca8133a8",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "9447db89-8841-4b03-ad7e-7af0df92f215",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "461c4102-e18f-42ff-acff-1dd270b2e803"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "461c4102-e18f-42ff-acff-1dd270b2e803",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  
"zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "461c4102-e18f-42ff-acff-1dd270b2e803",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "5771bbcd-3b86-4266-83d7-6dbc37141fc9",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "548c016f-4204-4663-9be1-9ca15df866b6",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "906822be-d7b5-437a-8855-496f614673e5"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "906822be-d7b5-437a-8855-496f614673e5",' '  "assigned_rate_limits": 
{' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "906822be-d7b5-437a-8855-496f614673e5",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "7b15bcbd-ee24-4fd2-9a8b-054488f152e0",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "2cf4c52e-d1a5-42d4-b3fd-76f1c3020646",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "a6f177cf-8524-4367-bd94-de98d941b1b8"' '  ],' '  
"product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "a6f177cf-8524-4367-bd94-de98d941b1b8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:07.754   05:01:21 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:11:07.754  ************************************
00:11:07.754  START TEST bdev_fio_trim
00:11:07.754  ************************************
00:11:07.754   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local sanitizers
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # shift
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # local asan_lib=
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:11:07.755    05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:11:07.755    05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # grep libasan
00:11:07.755    05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1351 -- # break
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:11:07.755   05:01:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:11:07.755  job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:11:07.755  fio-3.35
00:11:07.755  Starting 14 threads
00:11:19.962  
00:11:19.962  job_Malloc0: (groupid=0, jobs=14): err= 0: pid=128215: Wed Nov 20 05:01:32 2024
00:11:19.962    write: IOPS=148k, BW=580MiB/s (608MB/s)(5804MiB/10012msec); 0 zone resets
00:11:19.962      slat (usec): min=2, max=28031, avg=33.57, stdev=388.96
00:11:19.962      clat (usec): min=27, max=34640, avg=243.81, stdev=1070.12
00:11:19.962       lat (usec): min=40, max=34659, avg=277.38, stdev=1137.88
00:11:19.962      clat percentiles (usec):
00:11:19.962       | 50.000th=[  159], 99.000th=[  482], 99.900th=[16188], 99.990th=[20317],
00:11:19.962       | 99.999th=[28181]
00:11:19.962     bw (  KiB/s): min=350728, max=892608, per=100.00%, avg=596479.49, stdev=12329.26, samples=268
00:11:19.962     iops        : min=87682, max=223152, avg=149119.81, stdev=3082.32, samples=268
00:11:19.962    trim: IOPS=148k, BW=580MiB/s (608MB/s)(5804MiB/10012msec); 0 zone resets
00:11:19.962      slat (usec): min=4, max=34421, avg=22.00, stdev=311.07
00:11:19.962      clat (usec): min=4, max=34659, avg=256.97, stdev=1071.43
00:11:19.962       lat (usec): min=14, max=34672, avg=278.96, stdev=1115.46
00:11:19.962      clat percentiles (usec):
00:11:19.962       | 50.000th=[  180], 99.000th=[  388], 99.900th=[16188], 99.990th=[20317],
00:11:19.962       | 99.999th=[28181]
00:11:19.962     bw (  KiB/s): min=350728, max=892576, per=100.00%, avg=596479.91, stdev=12328.90, samples=268
00:11:19.962     iops        : min=87682, max=223144, avg=149119.81, stdev=3082.22, samples=268
00:11:19.962    lat (usec)   : 10=0.10%, 20=0.29%, 50=0.98%, 100=10.74%, 250=73.76%
00:11:19.962    lat (usec)   : 500=13.45%, 750=0.15%, 1000=0.01%
00:11:19.962    lat (msec)   : 2=0.01%, 4=0.02%, 10=0.04%, 20=0.43%, 50=0.01%
00:11:19.962    cpu          : usr=68.87%, sys=0.69%, ctx=167773, majf=0, minf=9138
00:11:19.962    IO depths    : 1=12.3%, 2=24.6%, 4=50.0%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:19.962       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:19.962       complete  : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:19.962       issued rwts: total=0,1485795,1485798,0 short=0,0,0,0 dropped=0,0,0,0
00:11:19.962       latency   : target=0, window=0, percentile=100.00%, depth=8
00:11:19.962  
00:11:19.962  Run status group 0 (all jobs):
00:11:19.962    WRITE: bw=580MiB/s (608MB/s), 580MiB/s-580MiB/s (608MB/s-608MB/s), io=5804MiB (6086MB), run=10012-10012msec
00:11:19.962     TRIM: bw=580MiB/s (608MB/s), 580MiB/s-580MiB/s (608MB/s-608MB/s), io=5804MiB (6086MB), run=10012-10012msec
00:11:19.962  -----------------------------------------------------
00:11:19.962  Suppressions used:
00:11:19.962    count      bytes template
00:11:19.962       14        129 /usr/src/fio/parse.c
00:11:19.962        1        904 libcrypto.so
00:11:19.962  -----------------------------------------------------
00:11:19.962  
00:11:19.962  
00:11:19.962  real	0m11.513s
00:11:19.962  user	1m38.810s
00:11:19.962  sys	0m1.891s
00:11:19.962   05:01:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:19.962   05:01:32 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:11:19.962  ************************************
00:11:19.962  END TEST bdev_fio_trim
00:11:19.962  ************************************
00:11:19.962   05:01:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:11:19.962   05:01:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:11:19.962  /home/vagrant/spdk_repo/spdk
00:11:19.962   05:01:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:11:19.962   05:01:32 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:11:19.962  
00:11:19.962  real	0m23.638s
00:11:19.962  user	3m11.433s
00:11:19.962  sys	0m6.233s
00:11:19.962   05:01:32 blockdev_general.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:19.962   05:01:32 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:11:19.962  ************************************
00:11:19.962  END TEST bdev_fio
00:11:19.962  ************************************
00:11:19.962   05:01:32 blockdev_general -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:11:19.962   05:01:32 blockdev_general -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:11:19.962   05:01:32 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:11:19.963   05:01:32 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:19.963   05:01:32 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:19.963  ************************************
00:11:19.963  START TEST bdev_verify
00:11:19.963  ************************************
00:11:19.963   05:01:32 blockdev_general.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:11:19.963  [2024-11-20 05:01:32.746951] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:11:19.963  [2024-11-20 05:01:32.747196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128383 ]
00:11:19.963  [2024-11-20 05:01:32.893443] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:19.963  [2024-11-20 05:01:32.914967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:19.963  [2024-11-20 05:01:32.946077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:19.963  [2024-11-20 05:01:32.946076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:19.963  [2024-11-20 05:01:33.084796] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:19.963  [2024-11-20 05:01:33.084950] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:19.963  [2024-11-20 05:01:33.092697] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:19.963  [2024-11-20 05:01:33.092787] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:19.963  [2024-11-20 05:01:33.100767] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:19.963  [2024-11-20 05:01:33.100883] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:19.963  [2024-11-20 05:01:33.100928] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:19.963  [2024-11-20 05:01:33.211586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:19.963  [2024-11-20 05:01:33.211724] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:19.963  [2024-11-20 05:01:33.211807] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:11:19.963  [2024-11-20 05:01:33.211848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:19.963  [2024-11-20 05:01:33.214509] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:19.963  [2024-11-20 05:01:33.214582] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:19.963  Running I/O for 5 seconds...
00:11:24.395      39808.00 IOPS,   155.50 MiB/s
[2024-11-20T05:01:38.919Z]     50368.00 IOPS,   196.75 MiB/s
[2024-11-20T05:01:38.919Z]     49714.67 IOPS,   194.20 MiB/s
00:11:24.962                                                                                                  Latency(us)
00:11:24.962  
[2024-11-20T05:01:38.919Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:24.962  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x1000
00:11:24.962  	 Malloc0             :       5.05    1674.05       6.54       0.00     0.00   76370.58     437.53  245938.73
00:11:24.962  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x1000 length 0x1000
00:11:24.962  	 Malloc0             :       5.13    1623.20       6.34       0.00     0.00   78749.46     422.63  287881.77
00:11:24.962  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x800
00:11:24.962  	 Malloc1p0           :       5.05     862.11       3.37       0.00     0.00  148090.01    2606.55  134408.38
00:11:24.962  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x800 length 0x800
00:11:24.962  	 Malloc1p0           :       5.13     848.80       3.32       0.00     0.00  150406.82    2576.76  134408.38
00:11:24.962  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x800
00:11:24.962  	 Malloc1p1           :       5.17     867.17       3.39       0.00     0.00  147003.33    2398.02  131548.63
00:11:24.962  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x800 length 0x800
00:11:24.962  	 Malloc1p1           :       5.13     848.54       3.31       0.00     0.00  150203.52    2398.02  132501.88
00:11:24.962  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x200
00:11:24.962  	 Malloc2p0           :       5.17     866.85       3.39       0.00     0.00  146823.03    2293.76  129642.12
00:11:24.962  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x200 length 0x200
00:11:24.962  	 Malloc2p0           :       5.13     848.29       3.31       0.00     0.00  150002.14    2323.55  129642.12
00:11:24.962  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x200
00:11:24.962  	 Malloc2p1           :       5.17     866.35       3.38       0.00     0.00  146698.39    2204.39  127735.62
00:11:24.962  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x200 length 0x200
00:11:24.962  	 Malloc2p1           :       5.13     848.03       3.31       0.00     0.00  149825.38    2189.50  127735.62
00:11:24.962  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x200
00:11:24.962  	 Malloc2p2           :       5.17     866.04       3.38       0.00     0.00  146542.49    2085.24  125829.12
00:11:24.962  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x200 length 0x200
00:11:24.962  	 Malloc2p2           :       5.13     847.77       3.31       0.00     0.00  149641.50    2100.13  125829.12
00:11:24.962  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x200
00:11:24.962  	 Malloc2p3           :       5.18     865.52       3.38       0.00     0.00  146421.33    1995.87  123922.62
00:11:24.962  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x200 length 0x200
00:11:24.962  	 Malloc2p3           :       5.14     847.49       3.31       0.00     0.00  149458.78    2010.76  123922.62
00:11:24.962  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x200
00:11:24.962  	 Malloc2p4           :       5.18     865.28       3.38       0.00     0.00  146256.45    1891.61  122969.37
00:11:24.962  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x200 length 0x200
00:11:24.962  	 Malloc2p4           :       5.14     847.20       3.31       0.00     0.00  149295.12    1966.08  121539.49
00:11:24.962  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.962  	 Verification LBA range: start 0x0 length 0x200
00:11:24.963  	 Malloc2p5           :       5.18     865.03       3.38       0.00     0.00  146091.82    1884.16  120586.24
00:11:24.963  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x200 length 0x200
00:11:24.963  	 Malloc2p5           :       5.14     846.90       3.31       0.00     0.00  149130.89    1861.82  119632.99
00:11:24.963  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x0 length 0x200
00:11:24.963  	 Malloc2p6           :       5.18     864.80       3.38       0.00     0.00  145936.07    1876.71  119156.36
00:11:24.963  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x200 length 0x200
00:11:24.963  	 Malloc2p6           :       5.14     846.61       3.31       0.00     0.00  148976.39    1884.16  118203.11
00:11:24.963  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x0 length 0x200
00:11:24.963  	 Malloc2p7           :       5.18     864.55       3.38       0.00     0.00  145785.06    1869.27  117726.49
00:11:24.963  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x200 length 0x200
00:11:24.963  	 Malloc2p7           :       5.14     846.32       3.31       0.00     0.00  148824.29    1750.11  116296.61
00:11:24.963  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x0 length 0x1000
00:11:24.963  	 TestPT              :       5.18     844.73       3.30       0.00     0.00  148314.96    7745.16  117249.86
00:11:24.963  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x1000 length 0x1000
00:11:24.963  	 TestPT              :       5.17     842.04       3.29       0.00     0.00  149318.78   22520.55  117249.86
00:11:24.963  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x0 length 0x2000
00:11:24.963  	 raid0               :       5.18     864.18       3.38       0.00     0.00  145341.56    2129.92  108670.60
00:11:24.963  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x2000 length 0x2000
00:11:24.963  	 raid0               :       5.15     845.57       3.30       0.00     0.00  148443.15    2100.13  107240.73
00:11:24.963  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x0 length 0x2000
00:11:24.963  	 concat0             :       5.19     863.94       3.37       0.00     0.00  145176.94    2055.45  107240.73
00:11:24.963  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x2000 length 0x2000
00:11:24.963  	 concat0             :       5.15     845.30       3.30       0.00     0.00  148273.60    2174.60  105334.23
00:11:24.963  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x0 length 0x1000
00:11:24.963  	 raid1               :       5.19     863.69       3.37       0.00     0.00  144988.58    2546.97  104380.97
00:11:24.963  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x1000 length 0x1000
00:11:24.963  	 raid1               :       5.15     845.01       3.30       0.00     0.00  148081.35    2234.18  102474.47
00:11:24.963  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x0 length 0x4e2
00:11:24.963  	 AIO0                :       5.19     863.30       3.37       0.00     0.00  144520.85    4766.25  112960.23
00:11:24.963  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:11:24.963  	 Verification LBA range: start 0x4e2 length 0x4e2
00:11:24.963  	 AIO0                :       5.17     865.86       3.38       0.00     0.00  144014.90     912.29  114866.73
00:11:24.963  
[2024-11-20T05:01:38.920Z]  ===================================================================================================================
00:11:24.963  
[2024-11-20T05:01:38.920Z]  Total                       :              28970.51     113.17       0.00     0.00  139700.11     422.63  287881.77
00:11:25.222  
00:11:25.222  real	0m6.397s
00:11:25.222  user	0m10.895s
00:11:25.222  sys	0m0.526s
00:11:25.222   05:01:39 blockdev_general.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:25.222  ************************************
00:11:25.222  END TEST bdev_verify
00:11:25.222  ************************************
00:11:25.222   05:01:39 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:11:25.222   05:01:39 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:25.222   05:01:39 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:11:25.222   05:01:39 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:25.222   05:01:39 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:25.222  ************************************
00:11:25.222  START TEST bdev_verify_big_io
00:11:25.222  ************************************
00:11:25.222   05:01:39 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:11:25.480  [2024-11-20 05:01:39.203758] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:11:25.480  [2024-11-20 05:01:39.204044] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128484 ]
00:11:25.480  [2024-11-20 05:01:39.361631] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:25.480  [2024-11-20 05:01:39.382681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:25.480  [2024-11-20 05:01:39.413651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:25.480  [2024-11-20 05:01:39.413650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:25.737  [2024-11-20 05:01:39.552272] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:25.737  [2024-11-20 05:01:39.552415] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:25.737  [2024-11-20 05:01:39.560203] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:25.737  [2024-11-20 05:01:39.560279] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:25.737  [2024-11-20 05:01:39.568274] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:25.737  [2024-11-20 05:01:39.568361] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:25.737  [2024-11-20 05:01:39.568405] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:25.737  [2024-11-20 05:01:39.659025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:25.737  [2024-11-20 05:01:39.659158] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:25.737  [2024-11-20 05:01:39.659218] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:11:25.737  [2024-11-20 05:01:39.659251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:25.737  [2024-11-20 05:01:39.661970] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:25.737  [2024-11-20 05:01:39.662044] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:25.996  [2024-11-20 05:01:39.842971] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.844617] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.845736] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.847277] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.848863] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.849912] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.851542] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.852638] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.854261] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.855317] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.856849] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.857942] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.859516] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.861021] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:11:25.996  [2024-11-20 05:01:39.862079] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:11:25.997  [2024-11-20 05:01:39.863715] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:11:25.997  [2024-11-20 05:01:39.886650] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:11:25.997  [2024-11-20 05:01:39.888766] bdevperf.c:1946:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
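The warnings above describe bdevperf clamping the requested queue depth (`-q 128`) to the number of IO requests each bdev can accept simultaneously (32 for the Malloc2p* bdevs, 78 for AIO0). A hypothetical one-function sketch of that clamp, not bdevperf's actual code:

```shell
# Effective depth for a verify job: the requested depth, limited to the
# number of IO requests the bdev can have outstanding at once.
effective_queue_depth() {
    local requested=$1 bdev_io_limit=$2
    echo $(( requested < bdev_io_limit ? requested : bdev_io_limit ))
}

effective_queue_depth 128 32   # Malloc2p* bdevs: clamped to 32
effective_queue_depth 128 78   # AIO0: clamped to 78
```

Verify jobs need the clamp because each in-flight verify IO must map to a submittable bdev request; exceeding the limit would stall submissions rather than add parallelism.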
00:11:25.997  Running I/O for 5 seconds...
00:11:32.600       3550.00 IOPS,   221.88 MiB/s
00:11:32.600                                                                                                  Latency(us)
00:11:32.600  
[2024-11-20T05:01:46.557Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:32.600  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x100
00:11:32.600  	 Malloc0             :       5.63     272.91      17.06       0.00     0.00  461848.22     696.32 1349803.29
00:11:32.600  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x100 length 0x100
00:11:32.600  	 Malloc0             :       5.72     268.30      16.77       0.00     0.00  470770.18     685.15 1471819.40
00:11:32.600  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x80
00:11:32.600  	 Malloc1p0           :       6.23      48.76       3.05       0.00     0.00 2429238.30    1176.67 3935019.75
00:11:32.600  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x80 length 0x80
00:11:32.600  	 Malloc1p0           :       5.88     144.20       9.01       0.00     0.00  842422.77    2040.55 1723477.64
00:11:32.600  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x80
00:11:32.600  	 Malloc1p1           :       6.24      48.75       3.05       0.00     0.00 2369312.45    1280.93 3797751.62
00:11:32.600  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x80 length 0x80
00:11:32.600  	 Malloc1p1           :       6.12      52.30       3.27       0.00     0.00 2244745.44    1623.51 3599475.43
00:11:32.600  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p0           :       5.89      38.06       2.38       0.00     0.00  770011.74     547.37 1441315.37
00:11:32.600  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p0           :       5.81      38.55       2.41       0.00     0.00  765518.85     573.44 1258291.20
00:11:32.600  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p1           :       5.89      38.06       2.38       0.00     0.00  765198.87     539.93 1426063.36
00:11:32.600  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p1           :       5.81      38.54       2.41       0.00     0.00  761280.96     536.20 1250665.19
00:11:32.600  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p2           :       5.89      38.05       2.38       0.00     0.00  760344.39     536.20 1410811.35
00:11:32.600  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p2           :       5.81      38.53       2.41       0.00     0.00  757185.09     536.20 1235413.18
00:11:32.600  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p3           :       5.89      38.04       2.38       0.00     0.00  755505.34     610.68 1395559.33
00:11:32.600  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p3           :       5.81      38.52       2.41       0.00     0.00  753212.83     536.20 1212535.16
00:11:32.600  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p4           :       5.89      38.03       2.38       0.00     0.00  750658.45     573.44 1372681.31
00:11:32.600  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p4           :       5.82      38.51       2.41       0.00     0.00  748988.05     539.93 1197283.14
00:11:32.600  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p5           :       5.89      38.03       2.38       0.00     0.00  746109.80     528.76 1357429.29
00:11:32.600  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p5           :       5.82      38.51       2.41       0.00     0.00  744862.51     610.68 1182031.13
00:11:32.600  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p6           :       5.89      38.02       2.38       0.00     0.00  740899.13     536.20 1334551.27
00:11:32.600  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p6           :       5.82      38.50       2.41       0.00     0.00  740394.81     551.10 1166779.11
00:11:32.600  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x20
00:11:32.600  	 Malloc2p7           :       5.89      38.01       2.38       0.00     0.00  736004.18     536.20 1311673.25
00:11:32.600  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x20 length 0x20
00:11:32.600  	 Malloc2p7           :       5.82      38.49       2.41       0.00     0.00  736364.61     536.20 1151527.10
00:11:32.600  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x100
00:11:32.600  	 TestPT              :       6.32      50.65       3.17       0.00     0.00 2116248.96    1131.99 3538467.37
00:11:32.600  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x100 length 0x100
00:11:32.600  	 TestPT              :       6.12      47.39       2.96       0.00     0.00 2311084.51   81502.95 3126662.98
00:11:32.600  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x200
00:11:32.600  	 raid0               :       6.28      56.08       3.51       0.00     0.00 1907053.77    1310.72 3401199.24
00:11:32.600  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x200 length 0x200
00:11:32.600  	 raid0               :       6.04      58.28       3.64       0.00     0.00 1867708.86    1236.25 3263931.11
00:11:32.600  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x200
00:11:32.600  	 concat0             :       6.22      65.15       4.07       0.00     0.00 1615219.67    1407.53 3294435.14
00:11:32.600  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x200 length 0x200
00:11:32.600  	 concat0             :       6.12      60.12       3.76       0.00     0.00 1751201.95    1325.61 3157167.01
00:11:32.600  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x100
00:11:32.600  	 raid1               :       6.24      70.52       4.41       0.00     0.00 1463609.17    1578.82 3187671.04
00:11:32.600  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x100 length 0x100
00:11:32.600  	 raid1               :       6.16      76.89       4.81       0.00     0.00 1369427.06    1571.37 3035150.89
00:11:32.600  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x0 length 0x4e
00:11:32.600  	 AIO0                :       6.28      88.57       5.54       0.00     0.00  696836.17    1854.37 1937005.85
00:11:32.600  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:11:32.600  	 Verification LBA range: start 0x4e length 0x4e
00:11:32.600  	 AIO0                :       6.21      76.70       4.79       0.00     0.00  823440.62     785.69 1799737.72
00:11:32.600  
[2024-11-20T05:01:46.557Z]  ===================================================================================================================
00:11:32.600  
[2024-11-20T05:01:46.557Z]  Total                       :               2098.02     131.13       0.00     0.00 1048253.13     528.76 3935019.75
00:11:32.859  
00:11:32.859  real	0m7.523s
00:11:32.859  user	0m13.864s
00:11:32.859  sys	0m0.433s
00:11:32.859   05:01:46 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:32.859   05:01:46 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:11:32.859  ************************************
00:11:32.859  END TEST bdev_verify_big_io
00:11:32.859  ************************************
00:11:32.859   05:01:46 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:32.859   05:01:46 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:32.859   05:01:46 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:32.859   05:01:46 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:32.859  ************************************
00:11:32.860  START TEST bdev_write_zeroes
00:11:32.860  ************************************
00:11:32.860   05:01:46 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:32.860  [2024-11-20 05:01:46.777620] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:11:32.860  [2024-11-20 05:01:46.777912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128605 ]
00:11:33.119  [2024-11-20 05:01:46.926668] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:33.119  [2024-11-20 05:01:46.949302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:33.119  [2024-11-20 05:01:46.979257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:33.378  [2024-11-20 05:01:47.118394] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:33.378  [2024-11-20 05:01:47.118507] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:11:33.378  [2024-11-20 05:01:47.126334] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:33.378  [2024-11-20 05:01:47.126402] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:11:33.378  [2024-11-20 05:01:47.134388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:33.378  [2024-11-20 05:01:47.134447] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:11:33.378  [2024-11-20 05:01:47.134477] vbdev_passthru.c: 736:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:11:33.378  [2024-11-20 05:01:47.226718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:11:33.378  [2024-11-20 05:01:47.226818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:11:33.378  [2024-11-20 05:01:47.226856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80
00:11:33.378  [2024-11-20 05:01:47.226877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:11:33.378  [2024-11-20 05:01:47.229240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:11:33.378  [2024-11-20 05:01:47.229313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:11:33.637  Running I/O for 1 seconds...
00:11:34.832     100346.00 IOPS,   391.98 MiB/s
00:11:34.832                                                                                                  Latency(us)
00:11:34.832  
[2024-11-20T05:01:48.789Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:11:34.832  Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc0             :       1.06    6182.18      24.15       0.00     0.00   20692.78     647.91   40989.79
00:11:34.832  Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc1p0           :       1.06    6174.53      24.12       0.00     0.00   20688.56     781.96   40036.54
00:11:34.832  Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc1p1           :       1.06    6167.08      24.09       0.00     0.00   20679.06     781.96   38844.97
00:11:34.832  Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p0           :       1.06    6160.55      24.06       0.00     0.00   20659.98     767.07   37653.41
00:11:34.832  Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p1           :       1.06    6153.47      24.04       0.00     0.00   20650.44     770.79   36700.16
00:11:34.832  Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p2           :       1.06    6145.97      24.01       0.00     0.00   20639.71     763.35   35508.60
00:11:34.832  Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p3           :       1.06    6138.51      23.98       0.00     0.00   20631.17     781.96   34555.35
00:11:34.832  Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p4           :       1.06    6130.88      23.95       0.00     0.00   20621.67     774.52   33363.78
00:11:34.832  Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p5           :       1.07    6124.40      23.92       0.00     0.00   20610.12     767.07   32648.84
00:11:34.832  Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p6           :       1.07    6116.68      23.89       0.00     0.00   20595.58     767.07   33363.78
00:11:34.832  Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 Malloc2p7           :       1.07    6110.11      23.87       0.00     0.00   20588.24     837.82   34793.66
00:11:34.832  Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 TestPT              :       1.07    6103.95      23.84       0.00     0.00   20568.72     901.12   35985.22
00:11:34.832  Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 raid0               :       1.07    6096.23      23.81       0.00     0.00   20544.42    1541.59   37415.10
00:11:34.832  Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 concat0             :       1.07    6089.36      23.79       0.00     0.00   20501.23    1541.59   38844.97
00:11:34.832  Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 raid1               :       1.07    6080.21      23.75       0.00     0.00   20452.37    2442.71   40513.16
00:11:34.832  Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:11:34.832  	 AIO0                :       1.07    6068.55      23.71       0.00     0.00   20390.19    1541.59   42181.35
00:11:34.832  
[2024-11-20T05:01:48.789Z]  ===================================================================================================================
00:11:34.832  
[2024-11-20T05:01:48.789Z]  Total                       :              98042.66     382.98       0.00     0.00   20594.65     647.91   42181.35
00:11:35.091  
00:11:35.091  real	0m2.211s
00:11:35.091  user	0m1.652s
00:11:35.091  sys	0m0.340s
00:11:35.091   05:01:48 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:35.091  ************************************
00:11:35.091  END TEST bdev_write_zeroes
00:11:35.091   05:01:48 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:11:35.091  ************************************
00:11:35.091   05:01:48 blockdev_general -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:35.091   05:01:48 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:35.091   05:01:48 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:35.091   05:01:48 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:35.091  ************************************
00:11:35.091  START TEST bdev_json_nonenclosed
00:11:35.091  ************************************
00:11:35.091   05:01:48 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:35.091  [2024-11-20 05:01:49.042865] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:11:35.091  [2024-11-20 05:01:49.043150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128659 ]
00:11:35.359  [2024-11-20 05:01:49.191926] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:35.359  [2024-11-20 05:01:49.219126] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:35.359  [2024-11-20 05:01:49.248480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:35.359  [2024-11-20 05:01:49.248588] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:11:35.359  [2024-11-20 05:01:49.248631] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:11:35.359  [2024-11-20 05:01:49.248662] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:35.618  
00:11:35.618  real	0m0.344s
00:11:35.618  user	0m0.116s
00:11:35.618  sys	0m0.128s
00:11:35.618   05:01:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:35.618   05:01:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:11:35.618  ************************************
00:11:35.618  END TEST bdev_json_nonenclosed
00:11:35.618  ************************************
00:11:35.618   05:01:49 blockdev_general -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:35.618   05:01:49 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:11:35.618   05:01:49 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:35.618   05:01:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:35.618  ************************************
00:11:35.618  START TEST bdev_json_nonarray
00:11:35.618  ************************************
00:11:35.618   05:01:49 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:11:35.618  [2024-11-20 05:01:49.429180] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:11:35.618  [2024-11-20 05:01:49.429376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128681 ]
00:11:35.618  [2024-11-20 05:01:49.562976] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:35.877  [2024-11-20 05:01:49.588945] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:35.877  [2024-11-20 05:01:49.618665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:35.877  [2024-11-20 05:01:49.618790] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:11:35.877  [2024-11-20 05:01:49.618837] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:11:35.877  [2024-11-20 05:01:49.618872] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:11:35.877  
00:11:35.877  real	0m0.313s
00:11:35.877  user	0m0.124s
00:11:35.877  sys	0m0.089s
00:11:35.877   05:01:49 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:35.877   05:01:49 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:11:35.877  ************************************
00:11:35.877  END TEST bdev_json_nonarray
00:11:35.878  ************************************
00:11:35.878   05:01:49 blockdev_general -- bdev/blockdev.sh@786 -- # [[ bdev == bdev ]]
00:11:35.878   05:01:49 blockdev_general -- bdev/blockdev.sh@787 -- # run_test bdev_qos qos_test_suite ''
00:11:35.878   05:01:49 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:35.878   05:01:49 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:35.878   05:01:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:11:35.878  ************************************
00:11:35.878  START TEST bdev_qos
00:11:35.878  ************************************
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- common/autotest_common.sh@1129 -- # qos_test_suite ''
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=128712
00:11:35.878  Process qos testing pid: 128712
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 128712'
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 128712
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # '[' -z 128712 ']'
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:35.878  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:35.878   05:01:49 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:35.878  [2024-11-20 05:01:49.820746] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:11:35.878  [2024-11-20 05:01:49.821524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid128712 ]
00:11:36.137  [2024-11-20 05:01:49.972947] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:11:36.137  [2024-11-20 05:01:50.006356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:36.137  [2024-11-20 05:01:50.052067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@868 -- # return 0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.075  Malloc_0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.075  [
00:11:37.075  {
00:11:37.075  "name": "Malloc_0",
00:11:37.075  "aliases": [
00:11:37.075  "eb81ab0b-f953-4cee-8d81-ecf494053a1f"
00:11:37.075  ],
00:11:37.075  "product_name": "Malloc disk",
00:11:37.075  "block_size": 512,
00:11:37.075  "num_blocks": 262144,
00:11:37.075  "uuid": "eb81ab0b-f953-4cee-8d81-ecf494053a1f",
00:11:37.075  "assigned_rate_limits": {
00:11:37.075  "rw_ios_per_sec": 0,
00:11:37.075  "rw_mbytes_per_sec": 0,
00:11:37.075  "r_mbytes_per_sec": 0,
00:11:37.075  "w_mbytes_per_sec": 0
00:11:37.075  },
00:11:37.075  "claimed": false,
00:11:37.075  "zoned": false,
00:11:37.075  "supported_io_types": {
00:11:37.075  "read": true,
00:11:37.075  "write": true,
00:11:37.075  "unmap": true,
00:11:37.075  "flush": true,
00:11:37.075  "reset": true,
00:11:37.075  "nvme_admin": false,
00:11:37.075  "nvme_io": false,
00:11:37.075  "nvme_io_md": false,
00:11:37.075  "write_zeroes": true,
00:11:37.075  "zcopy": true,
00:11:37.075  "get_zone_info": false,
00:11:37.075  "zone_management": false,
00:11:37.075  "zone_append": false,
00:11:37.075  "compare": false,
00:11:37.075  "compare_and_write": false,
00:11:37.075  "abort": true,
00:11:37.075  "seek_hole": false,
00:11:37.075  "seek_data": false,
00:11:37.075  "copy": true,
00:11:37.075  "nvme_iov_md": false
00:11:37.075  },
00:11:37.075  "memory_domains": [
00:11:37.075  {
00:11:37.075  "dma_device_id": "system",
00:11:37.075  "dma_device_type": 1
00:11:37.075  },
00:11:37.075  {
00:11:37.075  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:11:37.075  "dma_device_type": 2
00:11:37.075  }
00:11:37.075  ],
00:11:37.075  "driver_specific": {}
00:11:37.075  }
00:11:37.075  ]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.075  Null_1
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Null_1
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:37.075  [
00:11:37.075  {
00:11:37.075  "name": "Null_1",
00:11:37.075  "aliases": [
00:11:37.075  "b01ffd0d-90ba-4b93-af93-07a3bfdc5ee4"
00:11:37.075  ],
00:11:37.075  "product_name": "Null disk",
00:11:37.075  "block_size": 512,
00:11:37.075  "num_blocks": 262144,
00:11:37.075  "uuid": "b01ffd0d-90ba-4b93-af93-07a3bfdc5ee4",
00:11:37.075  "assigned_rate_limits": {
00:11:37.075  "rw_ios_per_sec": 0,
00:11:37.075  "rw_mbytes_per_sec": 0,
00:11:37.075  "r_mbytes_per_sec": 0,
00:11:37.075  "w_mbytes_per_sec": 0
00:11:37.075  },
00:11:37.075  "claimed": false,
00:11:37.075  "zoned": false,
00:11:37.075  "supported_io_types": {
00:11:37.075  "read": true,
00:11:37.075  "write": true,
00:11:37.075  "unmap": false,
00:11:37.075  "flush": false,
00:11:37.075  "reset": true,
00:11:37.075  "nvme_admin": false,
00:11:37.075  "nvme_io": false,
00:11:37.075  "nvme_io_md": false,
00:11:37.075  "write_zeroes": true,
00:11:37.075  "zcopy": false,
00:11:37.075  "get_zone_info": false,
00:11:37.075  "zone_management": false,
00:11:37.075  "zone_append": false,
00:11:37.075  "compare": false,
00:11:37.075  "compare_and_write": false,
00:11:37.075  "abort": true,
00:11:37.075  "seek_hole": false,
00:11:37.075  "seek_data": false,
00:11:37.075  "copy": false,
00:11:37.075  "nvme_iov_md": false
00:11:37.075  },
00:11:37.075  "driver_specific": {}
00:11:37.075  }
00:11:37.075  ]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0
00:11:37.075   05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0
00:11:37.075    05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0
00:11:37.075    05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:11:37.075    05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:11:37.075    05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:37.075     05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:37.075     05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:11:37.075     05:01:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:11:37.334  Running I/O for 60 seconds...
00:11:39.207     161792.00 IOPS,   632.00 MiB/s
[2024-11-20T05:01:54.099Z]    162048.00 IOPS,   633.00 MiB/s
[2024-11-20T05:01:55.477Z]    161962.67 IOPS,   632.67 MiB/s
[2024-11-20T05:01:56.412Z]    161920.00 IOPS,   632.50 MiB/s
[2024-11-20T05:01:56.412Z]    161792.00 IOPS,   632.00 MiB/s
[2024-11-20T05:01:56.412Z]   05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  80745.89  322983.57  0.00       0.00       327680.00  0.00     0.00   '
00:11:42.455    05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:11:42.455     05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:11:42.455    05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=80745.89
00:11:42.455    05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 80745
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=80745
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=20000
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 20000 -gt 1000 ']'
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc_0
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 20000 IOPS Malloc_0
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:42.455   05:01:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:42.455  ************************************
00:11:42.455  START TEST bdev_qos_iops
00:11:42.455  ************************************
00:11:42.455   05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1129 -- # run_qos_test 20000 IOPS Malloc_0
00:11:42.455   05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=20000
00:11:42.455   05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0
00:11:42.455    05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0
00:11:42.455    05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:11:42.455    05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:11:42.455    05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:42.455     05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:42.455     05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:11:42.455     05:01:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1
00:11:44.326     144791.33 IOPS,   565.59 MiB/s
[2024-11-20T05:01:59.217Z]    131437.14 IOPS,   513.43 MiB/s
[2024-11-20T05:02:00.152Z]    121429.00 IOPS,   474.33 MiB/s
[2024-11-20T05:02:01.087Z]    113642.67 IOPS,   443.92 MiB/s
[2024-11-20T05:02:01.654Z]    107401.60 IOPS,   419.54 MiB/s
[2024-11-20T05:02:01.654Z]   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  19991.16  79964.63   0.00       0.00       80880.00   0.00     0.00   '
00:11:47.697    05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:11:47.697     05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:11:47.697    05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=19991.16
00:11:47.697    05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 19991
00:11:47.697  ************************************
00:11:47.697  END TEST bdev_qos_iops
00:11:47.697  ************************************
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=19991
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']'
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=18000
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=22000
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 19991 -lt 18000 ']'
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 19991 -gt 22000 ']'
00:11:47.697  
00:11:47.697  real	0m5.211s
00:11:47.697  user	0m0.119s
00:11:47.697  sys	0m0.030s
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:47.697   05:02:01 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
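(Annotation: the `run_qos_test` bounds traced above — `lower_limit=18000` / `upper_limit=22000` around the 20000 IOPS limit, with the measured 19991 passing both `-lt`/`-gt` checks — are a plain ±10% window computed with bash integer arithmetic. A minimal sketch of that validation; the function name is hypothetical, the arithmetic matches the values in the log.)

```python
def qos_within_tolerance(qos_result: int, qos_limit: int) -> bool:
    # Integer (bash-style) ±10% window: for a 20000 IOPS limit this
    # yields exactly the log's lower_limit=18000 / upper_limit=22000.
    lower = qos_limit * 9 // 10
    upper = qos_limit * 11 // 10
    return lower <= qos_result <= upper

# The measured 19991 IOPS sits inside [18000, 22000], so the test passes.
print(qos_within_tolerance(19991, 20000))  # True
```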
00:11:47.697    05:02:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1
00:11:47.697    05:02:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:11:47.697    05:02:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:11:47.697    05:02:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:47.697     05:02:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:47.697     05:02:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1
00:11:47.697     05:02:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:11:49.199     102207.64 IOPS,   399.25 MiB/s
[2024-11-20T05:02:04.091Z]     97940.00 IOPS,   382.58 MiB/s
[2024-11-20T05:02:05.483Z]     94372.92 IOPS,   368.64 MiB/s
[2024-11-20T05:02:06.420Z]     91316.86 IOPS,   356.71 MiB/s
[2024-11-20T05:02:06.680Z]     88649.87 IOPS,   346.29 MiB/s
[2024-11-20T05:02:06.680Z]   05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    31057.38  124229.53  0.00       0.00       125952.00  0.00     0.00   '
00:11:52.723    05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:11:52.723    05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:52.723     05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:11:52.723    05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=125952.00
00:11:52.723    05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 125952
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=125952
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=12
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 12 -lt 2 ']'
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 12 Null_1
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 12 BANDWIDTH Null_1
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:52.723   05:02:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:52.723  ************************************
00:11:52.723  START TEST bdev_qos_bw
00:11:52.723  ************************************
00:11:52.723   05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1129 -- # run_qos_test 12 BANDWIDTH Null_1
00:11:52.723   05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=12
00:11:52.723   05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:11:52.723    05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1
00:11:52.723    05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:11:52.723    05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:11:52.723    05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:52.723     05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:52.723     05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1
00:11:52.723     05:02:06 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1
00:11:54.228      85487.69 IOPS,   333.94 MiB/s
[2024-11-20T05:02:09.121Z]     81827.24 IOPS,   319.64 MiB/s
[2024-11-20T05:02:10.511Z]     78569.28 IOPS,   306.91 MiB/s
[2024-11-20T05:02:11.449Z]     75653.21 IOPS,   295.52 MiB/s
[2024-11-20T05:02:12.017Z]     73030.05 IOPS,   285.27 MiB/s
[2024-11-20T05:02:12.017Z]   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    3072.09   12288.37   0.00       0.00       12472.00  0.00     0.00   '
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:58.060     05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=12472.00
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 12472
00:11:58.060  ************************************
00:11:58.060  END TEST bdev_qos_bw
00:11:58.060  ************************************
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=12472
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=12288
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=11059
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=13516
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 12472 -lt 11059 ']'
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 12472 -gt 13516 ']'
00:11:58.060  
00:11:58.060  real	0m5.220s
00:11:58.060  user	0m0.111s
00:11:58.060  sys	0m0.036s
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
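(Annotation: the bandwidth variant above compares in KiB/s — iostat column 6 — so the 12 MiB/s limit becomes `qos_limit=12288` with integer ±10% bounds 11059/13516, and the measured 12472 passes. The `bw_limit=125952` → `bw_limit=12` step is consistent with `125952 // 1024 // 10 == 12`, though the exact expression blockdev.sh uses is an assumption here; the function names below are hypothetical.)

```python
def derive_bw_limit_mib(unthrottled_kib_s: int) -> int:
    # Consistent with the log's 125952 KiB/s -> 12 MiB/s step
    # (125952 // 1024 // 10 == 12); the exact blockdev.sh
    # arithmetic is an assumption.
    return unthrottled_kib_s // 1024 // 10

def bw_bounds_kib(limit_mib: int) -> tuple:
    # Convert the MiB/s limit to KiB/s, then apply the same
    # integer +/-10% window as the IOPS case.
    qos_limit = limit_mib * 1024          # 12 MiB/s -> 12288 KiB/s
    return (qos_limit * 9 // 10, qos_limit * 11 // 10)

print(derive_bw_limit_mib(125952))  # 12
print(bw_bounds_kib(12))            # (11059, 13516)
print(bw_bounds_kib(2))             # (1843, 2252), the ro_bw case below
```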
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:11:58.060   05:02:11 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:11:58.060  ************************************
00:11:58.060  START TEST bdev_qos_ro_bw
00:11:58.060  ************************************
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1129 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2
00:11:58.060   05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:11:58.060    05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:11:58.060     05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:11:58.060     05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:11:58.060     05:02:11 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1
00:11:59.255      70487.90 IOPS,   275.34 MiB/s
[2024-11-20T05:02:14.148Z]     67446.91 IOPS,   263.46 MiB/s
[2024-11-20T05:02:15.525Z]     64670.17 IOPS,   252.62 MiB/s
[2024-11-20T05:02:16.460Z]     62125.00 IOPS,   242.68 MiB/s
[2024-11-20T05:02:17.394Z]     59783.36 IOPS,   233.53 MiB/s
[2024-11-20T05:02:17.394Z]     57621.85 IOPS,   225.09 MiB/s
[2024-11-20T05:02:17.394Z]   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  512.84   2051.36    0.00       0.00       2068.00   0.00     0.00   '
00:12:03.437    05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:12:03.437    05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:03.437     05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:12:03.437    05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2068.00
00:12:03.437    05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2068
00:12:03.437  ************************************
00:12:03.437  END TEST bdev_qos_ro_bw
00:12:03.437  ************************************
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2068
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2068 -lt 1843 ']'
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2068 -gt 2252 ']'
00:12:03.437  
00:12:03.437  real	0m5.170s
00:12:03.437  user	0m0.106s
00:12:03.437  sys	0m0.038s
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:03.437   05:02:17 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:12:03.437   05:02:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:12:03.437   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:03.437   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:04.005   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.005   05:02:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1
00:12:04.005   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:04.005   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:04.005  
00:12:04.005                                                                                                  Latency(us)
00:12:04.005  
[2024-11-20T05:02:17.962Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:04.005  Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:12:04.005  	 Malloc_0            :      26.64   27414.81     107.09       0.00     0.00    9251.47    2055.45  507129.48
00:12:04.005  Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:12:04.005  	 Null_1              :      26.75   28801.28     112.51       0.00     0.00    8866.55     707.49  107240.73
00:12:04.005  
[2024-11-20T05:02:17.962Z]  ===================================================================================================================
00:12:04.005  
[2024-11-20T05:02:17.962Z]  Total                       :              56216.09     219.59       0.00     0.00    9053.87     707.49  507129.48
00:12:04.005  {
00:12:04.005    "results": [
00:12:04.005      {
00:12:04.005        "job": "Malloc_0",
00:12:04.005        "core_mask": "0x2",
00:12:04.005        "workload": "randread",
00:12:04.005        "status": "finished",
00:12:04.005        "queue_depth": 256,
00:12:04.005        "io_size": 4096,
00:12:04.005        "runtime": 26.636736,
00:12:04.005        "iops": 27414.807880364922,
00:12:04.005        "mibps": 107.08909328267548,
00:12:04.005        "io_failed": 0,
00:12:04.005        "io_timeout": 0,
00:12:04.005        "avg_latency_us": 9251.470193479088,
00:12:04.005        "min_latency_us": 2055.447272727273,
00:12:04.005        "max_latency_us": 507129.48363636364
00:12:04.005      },
00:12:04.005      {
00:12:04.005        "job": "Null_1",
00:12:04.005        "core_mask": "0x2",
00:12:04.005        "workload": "randread",
00:12:04.005        "status": "finished",
00:12:04.005        "queue_depth": 256,
00:12:04.005        "io_size": 4096,
00:12:04.005        "runtime": 26.745058,
00:12:04.005        "iops": 28801.283586672347,
00:12:04.006        "mibps": 112.50501401043886,
00:12:04.006        "io_failed": 0,
00:12:04.006        "io_timeout": 0,
00:12:04.006        "avg_latency_us": 8866.546768578433,
00:12:04.006        "min_latency_us": 707.4909090909091,
00:12:04.006        "max_latency_us": 107240.72727272728
00:12:04.006      }
00:12:04.006    ],
00:12:04.006    "core_count": 1
00:12:04.006  }
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 128712
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' -z 128712 ']'
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # kill -0 128712
00:12:04.006    05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # uname
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:04.006    05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 128712
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 128712'
00:12:04.006  killing process with pid 128712
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@973 -- # kill 128712
00:12:04.006  Received shutdown signal, test time was about 26.782139 seconds
00:12:04.006  
00:12:04.006                                                                                                  Latency(us)
00:12:04.006  
[2024-11-20T05:02:17.963Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:04.006  
[2024-11-20T05:02:17.963Z]  ===================================================================================================================
00:12:04.006  
[2024-11-20T05:02:17.963Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:04.006   05:02:17 blockdev_general.bdev_qos -- common/autotest_common.sh@978 -- # wait 128712
00:12:04.265   05:02:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT
00:12:04.265  
00:12:04.265  real	0m28.303s
00:12:04.265  user	0m29.187s
00:12:04.265  sys	0m0.592s
00:12:04.265   05:02:18 blockdev_general.bdev_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:04.265   05:02:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:12:04.265  ************************************
00:12:04.265  END TEST bdev_qos
00:12:04.265  ************************************
00:12:04.265   05:02:18 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:12:04.265   05:02:18 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:04.265   05:02:18 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:04.265   05:02:18 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:04.265  ************************************
00:12:04.265  START TEST bdev_qd_sampling
00:12:04.265  ************************************
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1129 -- # qd_sampling_test_suite ''
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=129178
00:12:04.265  Process bdev QD sampling period testing pid: 129178
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 129178'
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 129178
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # '[' -z 129178 ']'
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:04.265  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:04.265   05:02:18 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:04.265  [2024-11-20 05:02:18.175568] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:04.265  [2024-11-20 05:02:18.175857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129178 ]
00:12:04.524  [2024-11-20 05:02:18.334362] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:04.524  [2024-11-20 05:02:18.365235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:04.524  [2024-11-20 05:02:18.409894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:04.524  [2024-11-20 05:02:18.409902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@868 -- # return 0
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:05.459  Malloc_QD
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_QD
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # local i
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:05.459  [
00:12:05.459  {
00:12:05.459  "name": "Malloc_QD",
00:12:05.459  "aliases": [
00:12:05.459  "7df4355e-0ee7-4e14-875a-a6c4a47401c4"
00:12:05.459  ],
00:12:05.459  "product_name": "Malloc disk",
00:12:05.459  "block_size": 512,
00:12:05.459  "num_blocks": 262144,
00:12:05.459  "uuid": "7df4355e-0ee7-4e14-875a-a6c4a47401c4",
00:12:05.459  "assigned_rate_limits": {
00:12:05.459  "rw_ios_per_sec": 0,
00:12:05.459  "rw_mbytes_per_sec": 0,
00:12:05.459  "r_mbytes_per_sec": 0,
00:12:05.459  "w_mbytes_per_sec": 0
00:12:05.459  },
00:12:05.459  "claimed": false,
00:12:05.459  "zoned": false,
00:12:05.459  "supported_io_types": {
00:12:05.459  "read": true,
00:12:05.459  "write": true,
00:12:05.459  "unmap": true,
00:12:05.459  "flush": true,
00:12:05.459  "reset": true,
00:12:05.459  "nvme_admin": false,
00:12:05.459  "nvme_io": false,
00:12:05.459  "nvme_io_md": false,
00:12:05.459  "write_zeroes": true,
00:12:05.459  "zcopy": true,
00:12:05.459  "get_zone_info": false,
00:12:05.459  "zone_management": false,
00:12:05.459  "zone_append": false,
00:12:05.459  "compare": false,
00:12:05.459  "compare_and_write": false,
00:12:05.459  "abort": true,
00:12:05.459  "seek_hole": false,
00:12:05.459  "seek_data": false,
00:12:05.459  "copy": true,
00:12:05.459  "nvme_iov_md": false
00:12:05.459  },
00:12:05.459  "memory_domains": [
00:12:05.459  {
00:12:05.459  "dma_device_id": "system",
00:12:05.459  "dma_device_type": 1
00:12:05.459  },
00:12:05.459  {
00:12:05.459  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:05.459  "dma_device_type": 2
00:12:05.459  }
00:12:05.459  ],
00:12:05.459  "driver_specific": {}
00:12:05.459  }
00:12:05.459  ]
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@911 -- # return 0
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2
00:12:05.459   05:02:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:05.459  Running I/O for 5 seconds...
00:12:07.330     126464.00 IOPS,   494.00 MiB/s
[2024-11-20T05:02:21.287Z]  05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.330    05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:12:07.330    05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.330    05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:07.330    05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.330   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{
00:12:07.330  "tick_rate": 2200000000,
00:12:07.330  "ticks": 1295841338961,
00:12:07.330  "bdevs": [
00:12:07.330  {
00:12:07.330  "name": "Malloc_QD",
00:12:07.330  "bytes_read": 990941696,
00:12:07.330  "num_read_ops": 241923,
00:12:07.330  "bytes_written": 0,
00:12:07.330  "num_write_ops": 0,
00:12:07.330  "bytes_unmapped": 0,
00:12:07.330  "num_unmap_ops": 0,
00:12:07.330  "bytes_copied": 0,
00:12:07.330  "num_copy_ops": 0,
00:12:07.331  "read_latency_ticks": 2145869869039,
00:12:07.331  "max_read_latency_ticks": 13449004,
00:12:07.331  "min_read_latency_ticks": 477443,
00:12:07.331  "write_latency_ticks": 0,
00:12:07.331  "max_write_latency_ticks": 0,
00:12:07.331  "min_write_latency_ticks": 0,
00:12:07.331  "unmap_latency_ticks": 0,
00:12:07.331  "max_unmap_latency_ticks": 0,
00:12:07.331  "min_unmap_latency_ticks": 0,
00:12:07.331  "copy_latency_ticks": 0,
00:12:07.331  "max_copy_latency_ticks": 0,
00:12:07.331  "min_copy_latency_ticks": 0,
00:12:07.331  "io_error": {},
00:12:07.331  "queue_depth_polling_period": 10,
00:12:07.331  "queue_depth": 512,
00:12:07.331  "io_time": 20,
00:12:07.331  "weighted_io_time": 10240
00:12:07.331  }
00:12:07.331  ]
00:12:07.331  }'
00:12:07.331    05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']'
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']'
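The sequence above sets a 10 ms queue-depth sampling period on Malloc_QD, reads it back through `bdev_get_iostat`, and fails the test if the value did not stick (the `'[' 10 -ne 10 ']'` check at `bdev/blockdev.sh@528`). That verification can be sketched standalone; the JSON below is trimmed from the iostat dump in this log, and plain `sed` stands in for the `jq` filter the harness uses.

```shell
# Trimmed iostat JSON from the run above; in the harness this comes from
# `rpc_cmd bdev_get_iostat -b Malloc_QD`.
iostats='{"bdevs":[{"name":"Malloc_QD","queue_depth_polling_period":10,"queue_depth":512,"io_time":20,"weighted_io_time":10240}]}'

# Stand-in for `jq -r '.bdevs[0].queue_depth_polling_period'`.
qd_sampling_period=$(printf '%s' "$iostats" | sed -n 's/.*"queue_depth_polling_period":\([0-9]*\).*/\1/p')

# Mirror the check at bdev/blockdev.sh@528.
[ "$qd_sampling_period" -eq 10 ] || { echo "sampling period not applied" >&2; exit 1; }

# Sanity check on the stats themselves: in this run weighted_io_time
# (10240) equals queue_depth * io_time (512 * 20), consistent with a
# steady queue depth of 512 across the measured I/O time.
weighted=$((512 * 20))
echo "$qd_sampling_period $weighted"
```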
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:07.590  
00:12:07.590                                                                                                  Latency(us)
00:12:07.590  
[2024-11-20T05:02:21.547Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:07.590  Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:12:07.590  	 Malloc_QD           :       1.98   62235.54     243.11       0.00     0.00    4103.20    1027.72    6136.55
00:12:07.590  Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:12:07.590  	 Malloc_QD           :       1.98   64230.53     250.90       0.00     0.00    3976.18     752.17    5928.03
00:12:07.590  
[2024-11-20T05:02:21.547Z]  ===================================================================================================================
00:12:07.590  
[2024-11-20T05:02:21.547Z]  Total                       :             126466.06     494.01       0.00     0.00    4038.65     752.17    6136.55
00:12:07.590  {
00:12:07.590    "results": [
00:12:07.590      {
00:12:07.590        "job": "Malloc_QD",
00:12:07.590        "core_mask": "0x1",
00:12:07.590        "workload": "randread",
00:12:07.590        "status": "finished",
00:12:07.590        "queue_depth": 256,
00:12:07.590        "io_size": 4096,
00:12:07.590        "runtime": 1.978548,
00:12:07.590        "iops": 62235.53838471445,
00:12:07.590        "mibps": 243.1075718152908,
00:12:07.590        "io_failed": 0,
00:12:07.590        "io_timeout": 0,
00:12:07.590        "avg_latency_us": 4103.204929124929,
00:12:07.590        "min_latency_us": 1027.7236363636364,
00:12:07.590        "max_latency_us": 6136.552727272728
00:12:07.590      },
00:12:07.590      {
00:12:07.590        "job": "Malloc_QD",
00:12:07.590        "core_mask": "0x2",
00:12:07.590        "workload": "randread",
00:12:07.590        "status": "finished",
00:12:07.590        "queue_depth": 256,
00:12:07.590        "io_size": 4096,
00:12:07.590        "runtime": 1.980865,
00:12:07.590        "iops": 64230.52555323053,
00:12:07.590        "mibps": 250.90049044230676,
00:12:07.590        "io_failed": 0,
00:12:07.590        "io_timeout": 0,
00:12:07.590        "avg_latency_us": 3976.177516005121,
00:12:07.590        "min_latency_us": 752.1745454545454,
00:12:07.590        "max_latency_us": 5928.029090909091
00:12:07.590      }
00:12:07.590    ],
00:12:07.590    "core_count": 2
00:12:07.590  }
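The `Total` row in the table above sums the two per-core jobs, but it is computed from the unrounded `iops` values in the JSON, not the two-decimal table entries; that is why 62235.54 + 64230.53 prints as 126466.06 rather than 126466.07. A quick awk check against the JSON values:

```shell
# Sum the unrounded per-core "iops" values from the JSON results above.
total=$(awk 'BEGIN { printf "%.2f", 62235.53838471445 + 64230.52555323053 }')
echo "$total"
```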
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 129178
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' -z 129178 ']'
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # kill -0 129178
00:12:07.590    05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # uname
00:12:07.590   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:07.591    05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129178
00:12:07.591   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:07.591   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:07.591  killing process with pid 129178
00:12:07.591   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129178'
00:12:07.591   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # kill 129178
00:12:07.591  Received shutdown signal, test time was about 2.036433 seconds
00:12:07.591  
00:12:07.591                                                                                                  Latency(us)
00:12:07.591  
[2024-11-20T05:02:21.548Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:07.591  
[2024-11-20T05:02:21.548Z]  ===================================================================================================================
00:12:07.591  
[2024-11-20T05:02:21.548Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:07.591   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@978 -- # wait 129178
00:12:07.850   05:02:21 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT
00:12:07.850  
00:12:07.850  real	0m3.497s
00:12:07.850  user	0m6.852s
00:12:07.850  sys	0m0.372s
00:12:07.850  ************************************
00:12:07.850  END TEST bdev_qd_sampling
00:12:07.850  ************************************
00:12:07.850   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:07.850   05:02:21 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:12:07.850   05:02:21 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_error error_test_suite ''
00:12:07.850   05:02:21 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:07.850   05:02:21 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:07.850   05:02:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:07.850  ************************************
00:12:07.850  START TEST bdev_error
00:12:07.850  ************************************
00:12:07.850   05:02:21 blockdev_general.bdev_error -- common/autotest_common.sh@1129 -- # error_test_suite ''
00:12:07.850   05:02:21 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1
00:12:07.850   05:02:21 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2
00:12:07.850   05:02:21 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1
00:12:07.850   05:02:21 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=129262
00:12:07.850   05:02:21 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:12:07.850   05:02:21 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 129262'
00:12:07.850  Process error testing pid: 129262
00:12:07.850   05:02:21 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 129262
00:12:07.850   05:02:21 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 129262 ']'
00:12:07.850   05:02:21 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:07.850   05:02:21 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:07.850  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:07.850   05:02:21 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:07.850   05:02:21 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:07.850   05:02:21 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:07.850  [2024-11-20 05:02:21.725820] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:07.850  [2024-11-20 05:02:21.726089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129262 ]
00:12:08.109  [2024-11-20 05:02:21.874507] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:08.109  [2024-11-20 05:02:21.900305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:08.109  [2024-11-20 05:02:21.937468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
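For reference, the bdevperf invocation traced at `bdev/blockdev.sh@470` breaks down as follows. The annotations are tied to evidence elsewhere in this log where possible; the `-f ''` reading is an inference, not verified against the bdevperf source.

```shell
# Annotated sketch of the traced launch, not a new command:
#   -z           start idle; wait for a perform_tests RPC (sent later via bdevperf.py)
#   -m 0x2       core mask: core 1 only (matches "Reactor started on core 1")
#   -q 16        queue depth 16 per job (matches "depth: 16" in the job lines)
#   -o 4096      4096-byte I/Os (matches "IO size: 4096")
#   -w randread  random-read workload
#   -t 5         run each test for 5 seconds ("Running I/O for 5 seconds...")
#   -f ''        left as traced; the run later reports "continue on error is
#                set", consistent with -f enabling continue-on-error
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
```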
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:12:09.142   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.142  Dev_1
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.142   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.142  [
00:12:09.142  {
00:12:09.142  "name": "Dev_1",
00:12:09.142  "aliases": [
00:12:09.142  "6e05274c-f920-4380-bf84-20839709ce75"
00:12:09.142  ],
00:12:09.142  "product_name": "Malloc disk",
00:12:09.142  "block_size": 512,
00:12:09.142  "num_blocks": 262144,
00:12:09.142  "uuid": "6e05274c-f920-4380-bf84-20839709ce75",
00:12:09.142  "assigned_rate_limits": {
00:12:09.142  "rw_ios_per_sec": 0,
00:12:09.142  "rw_mbytes_per_sec": 0,
00:12:09.142  "r_mbytes_per_sec": 0,
00:12:09.142  "w_mbytes_per_sec": 0
00:12:09.142  },
00:12:09.142  "claimed": false,
00:12:09.142  "zoned": false,
00:12:09.142  "supported_io_types": {
00:12:09.142  "read": true,
00:12:09.142  "write": true,
00:12:09.142  "unmap": true,
00:12:09.142  "flush": true,
00:12:09.142  "reset": true,
00:12:09.142  "nvme_admin": false,
00:12:09.142  "nvme_io": false,
00:12:09.142  "nvme_io_md": false,
00:12:09.142  "write_zeroes": true,
00:12:09.142  "zcopy": true,
00:12:09.142  "get_zone_info": false,
00:12:09.142  "zone_management": false,
00:12:09.142  "zone_append": false,
00:12:09.142  "compare": false,
00:12:09.142  "compare_and_write": false,
00:12:09.142  "abort": true,
00:12:09.142  "seek_hole": false,
00:12:09.142  "seek_data": false,
00:12:09.142  "copy": true,
00:12:09.142  "nvme_iov_md": false
00:12:09.142  },
00:12:09.142  "memory_domains": [
00:12:09.142  {
00:12:09.142  "dma_device_id": "system",
00:12:09.142  "dma_device_type": 1
00:12:09.142  },
00:12:09.142  {
00:12:09.142  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:09.142  "dma_device_type": 2
00:12:09.142  }
00:12:09.142  ],
00:12:09.142  "driver_specific": {}
00:12:09.142  }
00:12:09.142  ]
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.142   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:12:09.142   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.143  true
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.143   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.143  Dev_2
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.143   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.143  [
00:12:09.143  {
00:12:09.143  "name": "Dev_2",
00:12:09.143  "aliases": [
00:12:09.143  "c0a78706-f16d-4184-910c-89cfa0e6f984"
00:12:09.143  ],
00:12:09.143  "product_name": "Malloc disk",
00:12:09.143  "block_size": 512,
00:12:09.143  "num_blocks": 262144,
00:12:09.143  "uuid": "c0a78706-f16d-4184-910c-89cfa0e6f984",
00:12:09.143  "assigned_rate_limits": {
00:12:09.143  "rw_ios_per_sec": 0,
00:12:09.143  "rw_mbytes_per_sec": 0,
00:12:09.143  "r_mbytes_per_sec": 0,
00:12:09.143  "w_mbytes_per_sec": 0
00:12:09.143  },
00:12:09.143  "claimed": false,
00:12:09.143  "zoned": false,
00:12:09.143  "supported_io_types": {
00:12:09.143  "read": true,
00:12:09.143  "write": true,
00:12:09.143  "unmap": true,
00:12:09.143  "flush": true,
00:12:09.143  "reset": true,
00:12:09.143  "nvme_admin": false,
00:12:09.143  "nvme_io": false,
00:12:09.143  "nvme_io_md": false,
00:12:09.143  "write_zeroes": true,
00:12:09.143  "zcopy": true,
00:12:09.143  "get_zone_info": false,
00:12:09.143  "zone_management": false,
00:12:09.143  "zone_append": false,
00:12:09.143  "compare": false,
00:12:09.143  "compare_and_write": false,
00:12:09.143  "abort": true,
00:12:09.143  "seek_hole": false,
00:12:09.143  "seek_data": false,
00:12:09.143  "copy": true,
00:12:09.143  "nvme_iov_md": false
00:12:09.143  },
00:12:09.143  "memory_domains": [
00:12:09.143  {
00:12:09.143  "dma_device_id": "system",
00:12:09.143  "dma_device_type": 1
00:12:09.143  },
00:12:09.143  {
00:12:09.143  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:09.143  "dma_device_type": 2
00:12:09.143  }
00:12:09.143  ],
00:12:09.143  "driver_specific": {}
00:12:09.143  }
00:12:09.143  ]
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:12:09.143   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:09.143   05:02:22 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:09.143   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1
00:12:09.143   05:02:22 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:12:09.143  Running I/O for 5 seconds...
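Before the run starts, the suite wires its bdevs together over RPC; the calls traced above (`blockdev.sh@475`-`@480`) amount to the sequence below, where `EE_Dev_1` is the error-injection wrapper that `bdev_error_create` layers on top of `Dev_1` (a sketch of the traced calls; `rpc_cmd` wraps `scripts/rpc.py` in the harness).

```shell
rpc.py bdev_malloc_create -b Dev_1 128 512   # 128 MiB in 512 B blocks = 262144 blocks, as in the dump above
rpc.py bdev_error_create Dev_1               # exposes the wrapper bdev EE_Dev_1
rpc.py bdev_malloc_create -b Dev_2 128 512   # second target, left error-free
rpc.py bdev_error_inject_error EE_Dev_1 all failure -n 5   # fail the next 5 I/Os of any type
```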
00:12:10.081   05:02:23 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 129262
00:12:10.081  Process still exists as continue on error is set. Pid: 129262
00:12:10.081   05:02:23 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process still exists as continue on error is set. Pid: 129262'
00:12:10.081   05:02:23 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:12:10.081   05:02:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.081   05:02:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:10.081   05:02:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.081   05:02:23 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1
00:12:10.081   05:02:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:10.081   05:02:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:10.081   05:02:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:10.081   05:02:23 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5
00:12:10.081  Timeout while waiting for response:
00:12:10.081  
00:12:10.081  
00:12:11.019      89579.00 IOPS,   349.92 MiB/s
[2024-11-20T05:02:26.353Z]    101781.50 IOPS,   397.58 MiB/s
[2024-11-20T05:02:27.289Z]    105843.67 IOPS,   413.45 MiB/s
[2024-11-20T05:02:28.225Z]    107826.75 IOPS,   421.20 MiB/s
[2024-11-20T05:02:28.225Z]    109032.60 IOPS,   425.91 MiB/s
00:12:14.268                                                                                                  Latency(us)
00:12:14.268  
[2024-11-20T05:02:28.225Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:14.268  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:12:14.268  	 EE_Dev_1            :       0.93   45510.69     177.78       5.38     0.00     348.87     149.88     848.99
00:12:14.268  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:12:14.268  	 Dev_2               :       5.00  100510.75     392.62       0.00     0.00     156.62      88.44   23116.33
00:12:14.268  
[2024-11-20T05:02:28.225Z]  ===================================================================================================================
00:12:14.268  
[2024-11-20T05:02:28.225Z]  Total                       :             146021.44     570.40       5.38     0.00     171.54      88.44   23116.33
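The `Fail/s` column for EE_Dev_1 ties back to the injection count: 5.38 failures/s over the 0.93 s the job ran recovers the 5 errors injected with `-n 5`. With continue on error set (per the "continue on error is set" message above), the Dev_2 job still runs the full 5 seconds. The arithmetic:

```shell
# Fail/s (5.38) times runtime (0.93 s) should round back to the 5
# failures injected with `bdev_error_inject_error EE_Dev_1 all failure -n 5`.
failures=$(awk 'BEGIN { printf "%.0f", 5.38 * 0.93 }')
echo "$failures"
```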
00:12:15.225   05:02:28 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 129262
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' -z 129262 ']'
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # kill -0 129262
00:12:15.225    05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # uname
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:15.225    05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129262
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:12:15.225  killing process with pid 129262
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129262'
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # kill 129262
00:12:15.225  Received shutdown signal, test time was about 5.000000 seconds
00:12:15.225  
00:12:15.225                                                                                                  Latency(us)
00:12:15.225  
[2024-11-20T05:02:29.182Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:15.225  
[2024-11-20T05:02:29.182Z]  ===================================================================================================================
00:12:15.225  
[2024-11-20T05:02:29.182Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:15.225   05:02:28 blockdev_general.bdev_error -- common/autotest_common.sh@978 -- # wait 129262
00:12:15.225   05:02:29 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=129365
00:12:15.225   05:02:29 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
00:12:15.225  Process error testing pid: 129365
00:12:15.225   05:02:29 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 129365'
00:12:15.225   05:02:29 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 129365
00:12:15.225   05:02:29 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 129365 ']'
00:12:15.225   05:02:29 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:15.225   05:02:29 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:15.225  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:15.225   05:02:29 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:15.225   05:02:29 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:15.225   05:02:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:15.483  [2024-11-20 05:02:29.235218] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:15.483  [2024-11-20 05:02:29.235531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129365 ]
00:12:15.483  [2024-11-20 05:02:29.385240] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:15.483  [2024-11-20 05:02:29.408572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:15.742  [2024-11-20 05:02:29.446786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:16.309   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:16.309   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:12:16.309   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:12:16.309   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.309   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.568  Dev_1
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.568  [
00:12:16.568  {
00:12:16.568  "name": "Dev_1",
00:12:16.568  "aliases": [
00:12:16.568  "d63e627d-c115-4d35-a94c-c67afd785b33"
00:12:16.568  ],
00:12:16.568  "product_name": "Malloc disk",
00:12:16.568  "block_size": 512,
00:12:16.568  "num_blocks": 262144,
00:12:16.568  "uuid": "d63e627d-c115-4d35-a94c-c67afd785b33",
00:12:16.568  "assigned_rate_limits": {
00:12:16.568  "rw_ios_per_sec": 0,
00:12:16.568  "rw_mbytes_per_sec": 0,
00:12:16.568  "r_mbytes_per_sec": 0,
00:12:16.568  "w_mbytes_per_sec": 0
00:12:16.568  },
00:12:16.568  "claimed": false,
00:12:16.568  "zoned": false,
00:12:16.568  "supported_io_types": {
00:12:16.568  "read": true,
00:12:16.568  "write": true,
00:12:16.568  "unmap": true,
00:12:16.568  "flush": true,
00:12:16.568  "reset": true,
00:12:16.568  "nvme_admin": false,
00:12:16.568  "nvme_io": false,
00:12:16.568  "nvme_io_md": false,
00:12:16.568  "write_zeroes": true,
00:12:16.568  "zcopy": true,
00:12:16.568  "get_zone_info": false,
00:12:16.568  "zone_management": false,
00:12:16.568  "zone_append": false,
00:12:16.568  "compare": false,
00:12:16.568  "compare_and_write": false,
00:12:16.568  "abort": true,
00:12:16.568  "seek_hole": false,
00:12:16.568  "seek_data": false,
00:12:16.568  "copy": true,
00:12:16.568  "nvme_iov_md": false
00:12:16.568  },
00:12:16.568  "memory_domains": [
00:12:16.568  {
00:12:16.568  "dma_device_id": "system",
00:12:16.568  "dma_device_type": 1
00:12:16.568  },
00:12:16.568  {
00:12:16.568  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:16.568  "dma_device_type": 2
00:12:16.568  }
00:12:16.568  ],
00:12:16.568  "driver_specific": {}
00:12:16.568  }
00:12:16.568  ]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:12:16.568   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.568  true
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.568  Dev_2
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.568   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.568  [
00:12:16.568  {
00:12:16.568  "name": "Dev_2",
00:12:16.568  "aliases": [
00:12:16.568  "0062e388-0077-4247-bee5-c43a982aac4f"
00:12:16.568  ],
00:12:16.568  "product_name": "Malloc disk",
00:12:16.568  "block_size": 512,
00:12:16.568  "num_blocks": 262144,
00:12:16.568  "uuid": "0062e388-0077-4247-bee5-c43a982aac4f",
00:12:16.568  "assigned_rate_limits": {
00:12:16.568  "rw_ios_per_sec": 0,
00:12:16.568  "rw_mbytes_per_sec": 0,
00:12:16.568  "r_mbytes_per_sec": 0,
00:12:16.568  "w_mbytes_per_sec": 0
00:12:16.568  },
00:12:16.568  "claimed": false,
00:12:16.568  "zoned": false,
00:12:16.568  "supported_io_types": {
00:12:16.568  "read": true,
00:12:16.568  "write": true,
00:12:16.568  "unmap": true,
00:12:16.569  "flush": true,
00:12:16.569  "reset": true,
00:12:16.569  "nvme_admin": false,
00:12:16.569  "nvme_io": false,
00:12:16.569  "nvme_io_md": false,
00:12:16.569  "write_zeroes": true,
00:12:16.569  "zcopy": true,
00:12:16.569  "get_zone_info": false,
00:12:16.569  "zone_management": false,
00:12:16.569  "zone_append": false,
00:12:16.569  "compare": false,
00:12:16.569  "compare_and_write": false,
00:12:16.569  "abort": true,
00:12:16.569  "seek_hole": false,
00:12:16.569  "seek_data": false,
00:12:16.569  "copy": true,
00:12:16.569  "nvme_iov_md": false
00:12:16.569  },
00:12:16.569  "memory_domains": [
00:12:16.569  {
00:12:16.569  "dma_device_id": "system",
00:12:16.569  "dma_device_type": 1
00:12:16.569  },
00:12:16.569  {
00:12:16.569  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:16.569  "dma_device_type": 2
00:12:16.569  }
00:12:16.569  ],
00:12:16.569  "driver_specific": {}
00:12:16.569  }
00:12:16.569  ]
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:12:16.569   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:16.569   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 129365
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # local es=0
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@654 -- # valid_exec_arg wait 129365
00:12:16.569   05:02:30 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # local arg=wait
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:16.569    05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # type -t wait
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:16.569   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # wait 129365
00:12:16.569  Running I/O for 5 seconds...
00:12:16.569  task offset: 132928 on job bdev=EE_Dev_1 fails
00:12:16.569  
00:12:16.569                                                                                                  Latency(us)
00:12:16.569  
[2024-11-20T05:02:30.526Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:16.569  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:12:16.569  Job: EE_Dev_1 ended in about 0.00 seconds with error
00:12:16.569  	 EE_Dev_1            :       0.00   23861.17      93.21    5422.99     0.00     448.97     190.84     837.82
00:12:16.569  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:12:16.569  	 Dev_2               :       0.00   19476.57      76.08       0.00     0.00     520.76     172.22     916.01
00:12:16.569  
[2024-11-20T05:02:30.526Z]  ===================================================================================================================
00:12:16.569  
[2024-11-20T05:02:30.526Z]  Total                       :              43337.74     169.29    5422.99     0.00     487.91     172.22     916.01
00:12:16.569  [2024-11-20 05:02:30.516464] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:16.569  request:
00:12:16.569  {
00:12:16.569    "method": "perform_tests",
00:12:16.569    "req_id": 1
00:12:16.569  }
00:12:16.569  Got JSON-RPC error response
00:12:16.569  response:
00:12:16.569  {
00:12:16.569    "code": -32603,
00:12:16.569    "message": "bdevperf failed with error Operation not permitted"
00:12:16.569  }
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # es=255
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@664 -- # es=127
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@665 -- # case "$es" in
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@672 -- # es=1
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:17.137  
00:12:17.137  real	0m9.168s
00:12:17.137  user	0m9.540s
00:12:17.137  sys	0m0.752s
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:17.137   05:02:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:12:17.137  ************************************
00:12:17.137  END TEST bdev_error
00:12:17.137  ************************************
00:12:17.137   05:02:30 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_stat stat_test_suite ''
00:12:17.137   05:02:30 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:17.137   05:02:30 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:17.137   05:02:30 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:17.137  ************************************
00:12:17.137  START TEST bdev_stat
00:12:17.137  ************************************
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- common/autotest_common.sh@1129 -- # stat_test_suite ''
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=129411
00:12:17.137  Process Bdev IO statistics testing pid: 129411
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 129411'
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 129411
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # '[' -z 129411 ']'
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:17.137  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:17.137   05:02:30 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:17.137  [2024-11-20 05:02:30.950334] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:17.137  [2024-11-20 05:02:30.950605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129411 ]
00:12:17.397  [2024-11-20 05:02:31.111049] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:17.397  [2024-11-20 05:02:31.140590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:17.397  [2024-11-20 05:02:31.187953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:17.397  [2024-11-20 05:02:31.187962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:17.964   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@868 -- # return 0
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:18.223  Malloc_STAT
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_STAT
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # local i
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:18.223  [
00:12:18.223  {
00:12:18.223  "name": "Malloc_STAT",
00:12:18.223  "aliases": [
00:12:18.223  "201d32f8-6dc7-4442-8000-21555c178899"
00:12:18.223  ],
00:12:18.223  "product_name": "Malloc disk",
00:12:18.223  "block_size": 512,
00:12:18.223  "num_blocks": 262144,
00:12:18.223  "uuid": "201d32f8-6dc7-4442-8000-21555c178899",
00:12:18.223  "assigned_rate_limits": {
00:12:18.223  "rw_ios_per_sec": 0,
00:12:18.223  "rw_mbytes_per_sec": 0,
00:12:18.223  "r_mbytes_per_sec": 0,
00:12:18.223  "w_mbytes_per_sec": 0
00:12:18.223  },
00:12:18.223  "claimed": false,
00:12:18.223  "zoned": false,
00:12:18.223  "supported_io_types": {
00:12:18.223  "read": true,
00:12:18.223  "write": true,
00:12:18.223  "unmap": true,
00:12:18.223  "flush": true,
00:12:18.223  "reset": true,
00:12:18.223  "nvme_admin": false,
00:12:18.223  "nvme_io": false,
00:12:18.223  "nvme_io_md": false,
00:12:18.223  "write_zeroes": true,
00:12:18.223  "zcopy": true,
00:12:18.223  "get_zone_info": false,
00:12:18.223  "zone_management": false,
00:12:18.223  "zone_append": false,
00:12:18.223  "compare": false,
00:12:18.223  "compare_and_write": false,
00:12:18.223  "abort": true,
00:12:18.223  "seek_hole": false,
00:12:18.223  "seek_data": false,
00:12:18.223  "copy": true,
00:12:18.223  "nvme_iov_md": false
00:12:18.223  },
00:12:18.223  "memory_domains": [
00:12:18.223  {
00:12:18.223  "dma_device_id": "system",
00:12:18.223  "dma_device_type": 1
00:12:18.223  },
00:12:18.223  {
00:12:18.223  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:18.223  "dma_device_type": 2
00:12:18.223  }
00:12:18.223  ],
00:12:18.223  "driver_specific": {}
00:12:18.223  }
00:12:18.223  ]
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- common/autotest_common.sh@911 -- # return 0
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2
00:12:18.223   05:02:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:12:18.223  Running I/O for 10 seconds...
00:12:20.094     127232.00 IOPS,   497.00 MiB/s
[2024-11-20T05:02:34.051Z]  05:02:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT
00:12:20.094   05:02:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT
00:12:20.094   05:02:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats
00:12:20.094   05:02:33 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1
00:12:20.094   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2
00:12:20.094   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel
00:12:20.094   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1
00:12:20.094   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2
00:12:20.094   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0
00:12:20.094    05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:12:20.094    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.094    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:20.094    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.094   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{
00:12:20.094  "tick_rate": 2200000000,
00:12:20.094  "ticks": 1323944467199,
00:12:20.094  "bdevs": [
00:12:20.094  {
00:12:20.094  "name": "Malloc_STAT",
00:12:20.094  "bytes_read": 1006670336,
00:12:20.094  "num_read_ops": 245763,
00:12:20.094  "bytes_written": 0,
00:12:20.094  "num_write_ops": 0,
00:12:20.094  "bytes_unmapped": 0,
00:12:20.094  "num_unmap_ops": 0,
00:12:20.094  "bytes_copied": 0,
00:12:20.094  "num_copy_ops": 0,
00:12:20.094  "read_latency_ticks": 2173637327505,
00:12:20.094  "max_read_latency_ticks": 12058228,
00:12:20.094  "min_read_latency_ticks": 517460,
00:12:20.094  "write_latency_ticks": 0,
00:12:20.094  "max_write_latency_ticks": 0,
00:12:20.094  "min_write_latency_ticks": 0,
00:12:20.094  "unmap_latency_ticks": 0,
00:12:20.094  "max_unmap_latency_ticks": 0,
00:12:20.094  "min_unmap_latency_ticks": 0,
00:12:20.094  "copy_latency_ticks": 0,
00:12:20.094  "max_copy_latency_ticks": 0,
00:12:20.094  "min_copy_latency_ticks": 0,
00:12:20.094  "io_error": {}
00:12:20.094  }
00:12:20.094  ]
00:12:20.094  }'
00:12:20.094    05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops'
00:12:20.352   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=245763
00:12:20.352    05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c
00:12:20.352    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.352    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:20.352     126976.00 IOPS,   496.00 MiB/s
[2024-11-20T05:02:34.309Z]   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.352   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{
00:12:20.353  "tick_rate": 2200000000,
00:12:20.353  "ticks": 1324086654311,
00:12:20.353  "name": "Malloc_STAT",
00:12:20.353  "channels": [
00:12:20.353  {
00:12:20.353  "thread_id": 2,
00:12:20.353  "bytes_read": 516947968,
00:12:20.353  "num_read_ops": 126208,
00:12:20.353  "bytes_written": 0,
00:12:20.353  "num_write_ops": 0,
00:12:20.353  "bytes_unmapped": 0,
00:12:20.353  "num_unmap_ops": 0,
00:12:20.353  "bytes_copied": 0,
00:12:20.353  "num_copy_ops": 0,
00:12:20.353  "read_latency_ticks": 1121914537848,
00:12:20.353  "max_read_latency_ticks": 12019032,
00:12:20.353  "min_read_latency_ticks": 6314694,
00:12:20.353  "write_latency_ticks": 0,
00:12:20.353  "max_write_latency_ticks": 0,
00:12:20.353  "min_write_latency_ticks": 0,
00:12:20.353  "unmap_latency_ticks": 0,
00:12:20.353  "max_unmap_latency_ticks": 0,
00:12:20.353  "min_unmap_latency_ticks": 0,
00:12:20.353  "copy_latency_ticks": 0,
00:12:20.353  "max_copy_latency_ticks": 0,
00:12:20.353  "min_copy_latency_ticks": 0
00:12:20.353  },
00:12:20.353  {
00:12:20.353  "thread_id": 3,
00:12:20.353  "bytes_read": 521142272,
00:12:20.353  "num_read_ops": 127232,
00:12:20.353  "bytes_written": 0,
00:12:20.353  "num_write_ops": 0,
00:12:20.353  "bytes_unmapped": 0,
00:12:20.353  "num_unmap_ops": 0,
00:12:20.353  "bytes_copied": 0,
00:12:20.353  "num_copy_ops": 0,
00:12:20.353  "read_latency_ticks": 1123953067229,
00:12:20.353  "max_read_latency_ticks": 12058228,
00:12:20.353  "min_read_latency_ticks": 6019510,
00:12:20.353  "write_latency_ticks": 0,
00:12:20.353  "max_write_latency_ticks": 0,
00:12:20.353  "min_write_latency_ticks": 0,
00:12:20.353  "unmap_latency_ticks": 0,
00:12:20.353  "max_unmap_latency_ticks": 0,
00:12:20.353  "min_unmap_latency_ticks": 0,
00:12:20.353  "copy_latency_ticks": 0,
00:12:20.353  "max_copy_latency_ticks": 0,
00:12:20.353  "min_copy_latency_ticks": 0
00:12:20.353  }
00:12:20.353  ]
00:12:20.353  }'
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops'
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=126208
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=126208
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops'
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=127232
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=253440
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{
00:12:20.353  "tick_rate": 2200000000,
00:12:20.353  "ticks": 1324342971597,
00:12:20.353  "bdevs": [
00:12:20.353  {
00:12:20.353  "name": "Malloc_STAT",
00:12:20.353  "bytes_read": 1094750720,
00:12:20.353  "num_read_ops": 267267,
00:12:20.353  "bytes_written": 0,
00:12:20.353  "num_write_ops": 0,
00:12:20.353  "bytes_unmapped": 0,
00:12:20.353  "num_unmap_ops": 0,
00:12:20.353  "bytes_copied": 0,
00:12:20.353  "num_copy_ops": 0,
00:12:20.353  "read_latency_ticks": 2376622301038,
00:12:20.353  "max_read_latency_ticks": 13791414,
00:12:20.353  "min_read_latency_ticks": 517460,
00:12:20.353  "write_latency_ticks": 0,
00:12:20.353  "max_write_latency_ticks": 0,
00:12:20.353  "min_write_latency_ticks": 0,
00:12:20.353  "unmap_latency_ticks": 0,
00:12:20.353  "max_unmap_latency_ticks": 0,
00:12:20.353  "min_unmap_latency_ticks": 0,
00:12:20.353  "copy_latency_ticks": 0,
00:12:20.353  "max_copy_latency_ticks": 0,
00:12:20.353  "min_copy_latency_ticks": 0,
00:12:20.353  "io_error": {}
00:12:20.353  }
00:12:20.353  ]
00:12:20.353  }'
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops'
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=267267
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 253440 -lt 245763 ']'
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 253440 -gt 267267 ']'
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:20.353  
00:12:20.353                                                                                                  Latency(us)
00:12:20.353  
[2024-11-20T05:02:34.310Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:20.353  Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:12:20.353  	 Malloc_STAT         :       2.19   62909.88     245.74       0.00     0.00    4059.48     960.70    5719.51
00:12:20.353  Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:12:20.353  	 Malloc_STAT         :       2.19   63349.18     247.46       0.00     0.00    4031.72     722.39    6285.50
00:12:20.353  
[2024-11-20T05:02:34.310Z]  ===================================================================================================================
00:12:20.353  
[2024-11-20T05:02:34.310Z]  Total                       :             126259.06     493.20       0.00     0.00    4045.55     722.39    6285.50
00:12:20.353  {
00:12:20.353    "results": [
00:12:20.353      {
00:12:20.353        "job": "Malloc_STAT",
00:12:20.353        "core_mask": "0x1",
00:12:20.353        "workload": "randread",
00:12:20.353        "status": "finished",
00:12:20.353        "queue_depth": 256,
00:12:20.353        "io_size": 4096,
00:12:20.353        "runtime": 2.185221,
00:12:20.353        "iops": 62909.884171898404,
00:12:20.353        "mibps": 245.74173504647814,
00:12:20.353        "io_failed": 0,
00:12:20.353        "io_timeout": 0,
00:12:20.353        "avg_latency_us": 4059.477772134755,
00:12:20.353        "min_latency_us": 960.6981818181819,
00:12:20.353        "max_latency_us": 5719.505454545455
00:12:20.353      },
00:12:20.353      {
00:12:20.353        "job": "Malloc_STAT",
00:12:20.353        "core_mask": "0x2",
00:12:20.353        "workload": "randread",
00:12:20.353        "status": "finished",
00:12:20.353        "queue_depth": 256,
00:12:20.353        "io_size": 4096,
00:12:20.353        "runtime": 2.186232,
00:12:20.353        "iops": 63349.177946347874,
00:12:20.353        "mibps": 247.45772635292138,
00:12:20.353        "io_failed": 0,
00:12:20.353        "io_timeout": 0,
00:12:20.353        "avg_latency_us": 4031.7242547471014,
00:12:20.353        "min_latency_us": 722.3854545454545,
00:12:20.353        "max_latency_us": 6285.498181818181
00:12:20.353      }
00:12:20.353    ],
00:12:20.353    "core_count": 2
00:12:20.353  }
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 129411
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' -z 129411 ']'
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # kill -0 129411
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # uname
00:12:20.353   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:20.353    05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129411
00:12:20.612   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:20.612   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:20.612  killing process with pid 129411
00:12:20.612   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129411'
00:12:20.612   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # kill 129411
00:12:20.612  Received shutdown signal, test time was about 2.242304 seconds
00:12:20.612  
00:12:20.612                                                                                                  Latency(us)
00:12:20.612  
[2024-11-20T05:02:34.569Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:20.613  
[2024-11-20T05:02:34.570Z]  ===================================================================================================================
00:12:20.613  
[2024-11-20T05:02:34.570Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:12:20.613   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@978 -- # wait 129411
00:12:20.871   05:02:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT
00:12:20.871  
00:12:20.871  real	0m3.683s
00:12:20.871  user	0m7.395s
00:12:20.871  sys	0m0.334s
00:12:20.871   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:20.871   05:02:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:12:20.871  ************************************
00:12:20.871  END TEST bdev_stat
00:12:20.871  ************************************
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@793 -- # [[ bdev == gpt ]]
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@797 -- # [[ bdev == crypto_sw ]]
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@810 -- # cleanup
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]]
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]]
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]]
00:12:20.871   05:02:34 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]]
00:12:20.871  
00:12:20.871  real	1m54.566s
00:12:20.871  user	5m11.769s
00:12:20.871  sys	0m20.372s
00:12:20.871   05:02:34 blockdev_general -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:20.872   05:02:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:12:20.872  ************************************
00:12:20.872  END TEST blockdev_general
00:12:20.872  ************************************
00:12:20.872   05:02:34  -- spdk/autotest.sh@181 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:12:20.872   05:02:34  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:20.872   05:02:34  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:20.872   05:02:34  -- common/autotest_common.sh@10 -- # set +x
00:12:20.872  ************************************
00:12:20.872  START TEST bdevperf_config
00:12:20.872  ************************************
00:12:20.872   05:02:34 bdevperf_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:12:20.872  * Looking for test storage...
00:12:20.872  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
00:12:20.872    05:02:34 bdevperf_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:20.872     05:02:34 bdevperf_config -- common/autotest_common.sh@1693 -- # lcov --version
00:12:20.872     05:02:34 bdevperf_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:20.872    05:02:34 bdevperf_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@336 -- # IFS=.-:
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@336 -- # read -ra ver1
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@337 -- # IFS=.-:
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@337 -- # read -ra ver2
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@338 -- # local 'op=<'
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@340 -- # ver1_l=2
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@341 -- # ver2_l=1
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@344 -- # case "$op" in
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@345 -- # : 1
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:20.872     05:02:34 bdevperf_config -- scripts/common.sh@365 -- # decimal 1
00:12:20.872     05:02:34 bdevperf_config -- scripts/common.sh@353 -- # local d=1
00:12:20.872     05:02:34 bdevperf_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:20.872     05:02:34 bdevperf_config -- scripts/common.sh@355 -- # echo 1
00:12:20.872    05:02:34 bdevperf_config -- scripts/common.sh@365 -- # ver1[v]=1
00:12:21.130     05:02:34 bdevperf_config -- scripts/common.sh@366 -- # decimal 2
00:12:21.130     05:02:34 bdevperf_config -- scripts/common.sh@353 -- # local d=2
00:12:21.130     05:02:34 bdevperf_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:21.130     05:02:34 bdevperf_config -- scripts/common.sh@355 -- # echo 2
00:12:21.130    05:02:34 bdevperf_config -- scripts/common.sh@366 -- # ver2[v]=2
00:12:21.131    05:02:34 bdevperf_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:21.131    05:02:34 bdevperf_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:21.131    05:02:34 bdevperf_config -- scripts/common.sh@368 -- # return 0
00:12:21.131    05:02:34 bdevperf_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:21.131    05:02:34 bdevperf_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:21.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.131  		--rc genhtml_branch_coverage=1
00:12:21.131  		--rc genhtml_function_coverage=1
00:12:21.131  		--rc genhtml_legend=1
00:12:21.131  		--rc geninfo_all_blocks=1
00:12:21.131  		--rc geninfo_unexecuted_blocks=1
00:12:21.131  		
00:12:21.131  		'
00:12:21.131    05:02:34 bdevperf_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:21.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.131  		--rc genhtml_branch_coverage=1
00:12:21.131  		--rc genhtml_function_coverage=1
00:12:21.131  		--rc genhtml_legend=1
00:12:21.131  		--rc geninfo_all_blocks=1
00:12:21.131  		--rc geninfo_unexecuted_blocks=1
00:12:21.131  		
00:12:21.131  		'
00:12:21.131    05:02:34 bdevperf_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:21.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.131  		--rc genhtml_branch_coverage=1
00:12:21.131  		--rc genhtml_function_coverage=1
00:12:21.131  		--rc genhtml_legend=1
00:12:21.131  		--rc geninfo_all_blocks=1
00:12:21.131  		--rc geninfo_unexecuted_blocks=1
00:12:21.131  		
00:12:21.131  		'
00:12:21.131    05:02:34 bdevperf_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:21.131  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:21.131  		--rc genhtml_branch_coverage=1
00:12:21.131  		--rc genhtml_function_coverage=1
00:12:21.131  		--rc genhtml_legend=1
00:12:21.131  		--rc geninfo_all_blocks=1
00:12:21.131  		--rc geninfo_unexecuted_blocks=1
00:12:21.131  		
00:12:21.131  		'
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh
00:12:21.131    05:02:34 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:12:21.131  
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:12:21.131  
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:12:21.131  
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:12:21.131  
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:12:21.131  
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:21.131   05:02:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:21.131    05:02:34 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:23.665   05:02:37 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-11-20 05:02:34.906038] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:23.665  [2024-11-20 05:02:34.906271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129564 ]
00:12:23.665  Using job config with 4 jobs
00:12:23.665  [2024-11-20 05:02:35.040747] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:23.665  [2024-11-20 05:02:35.067523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.665  [2024-11-20 05:02:35.097800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.665  cpumask for '\''job0'\'' is too big
00:12:23.665  cpumask for '\''job1'\'' is too big
00:12:23.665  cpumask for '\''job2'\'' is too big
00:12:23.665  cpumask for '\''job3'\'' is too big
00:12:23.665  Running I/O for 2 seconds...
00:12:23.665     125952.00 IOPS,   123.00 MiB/s
[2024-11-20T05:02:37.622Z]    125952.00 IOPS,   123.00 MiB/s
00:12:23.665                                                                                                  Latency(us)
00:12:23.665  
[2024-11-20T05:02:37.622Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:23.665  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.665  	 Malloc0             :       2.02   31479.62      30.74       0.00     0.00    8120.06    1496.90   12809.31
00:12:23.665  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.665  	 Malloc0             :       2.02   31458.64      30.72       0.00     0.00    8111.87    1496.90   11260.28
00:12:23.665  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.665  	 Malloc0             :       2.02   31438.25      30.70       0.00     0.00    8101.86    1534.14   12034.79
00:12:23.665  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.665  	 Malloc0             :       2.02   31417.74      30.68       0.00     0.00    8093.78    1482.01   13405.09
00:12:23.665  
[2024-11-20T05:02:37.622Z]  ===================================================================================================================
00:12:23.665  
[2024-11-20T05:02:37.622Z]  Total                       :             125794.25     122.85       0.00     0.00    8106.89    1482.01   13405.09'
00:12:23.665    05:02:37 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-11-20 05:02:34.906038] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:23.665  [2024-11-20 05:02:34.906271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129564 ]
00:12:23.665  Using job config with 4 jobs
00:12:23.665  [2024-11-20 05:02:35.040747] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:23.665  [2024-11-20 05:02:35.067523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.665  [2024-11-20 05:02:35.097800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.665  cpumask for '\''job0'\'' is too big
00:12:23.665  cpumask for '\''job1'\'' is too big
00:12:23.665  cpumask for '\''job2'\'' is too big
00:12:23.666  cpumask for '\''job3'\'' is too big
00:12:23.666  Running I/O for 2 seconds...
00:12:23.666     125952.00 IOPS,   123.00 MiB/s
[2024-11-20T05:02:37.623Z]    125952.00 IOPS,   123.00 MiB/s
00:12:23.666                                                                                                  Latency(us)
00:12:23.666  
[2024-11-20T05:02:37.623Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31479.62      30.74       0.00     0.00    8120.06    1496.90   12809.31
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31458.64      30.72       0.00     0.00    8111.87    1496.90   11260.28
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31438.25      30.70       0.00     0.00    8101.86    1534.14   12034.79
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31417.74      30.68       0.00     0.00    8093.78    1482.01   13405.09
00:12:23.666  
[2024-11-20T05:02:37.623Z]  ===================================================================================================================
00:12:23.666  
[2024-11-20T05:02:37.623Z]  Total                       :             125794.25     122.85       0.00     0.00    8106.89    1482.01   13405.09'
00:12:23.666    05:02:37 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:12:23.666    05:02:37 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-11-20 05:02:34.906038] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:23.666  [2024-11-20 05:02:34.906271] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129564 ]
00:12:23.666  Using job config with 4 jobs
00:12:23.666  [2024-11-20 05:02:35.040747] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:23.666  [2024-11-20 05:02:35.067523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.666  [2024-11-20 05:02:35.097800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:23.666  cpumask for '\''job0'\'' is too big
00:12:23.666  cpumask for '\''job1'\'' is too big
00:12:23.666  cpumask for '\''job2'\'' is too big
00:12:23.666  cpumask for '\''job3'\'' is too big
00:12:23.666  Running I/O for 2 seconds...
00:12:23.666     125952.00 IOPS,   123.00 MiB/s
[2024-11-20T05:02:37.623Z]    125952.00 IOPS,   123.00 MiB/s
00:12:23.666                                                                                                  Latency(us)
00:12:23.666  
[2024-11-20T05:02:37.623Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31479.62      30.74       0.00     0.00    8120.06    1496.90   12809.31
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31458.64      30.72       0.00     0.00    8111.87    1496.90   11260.28
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31438.25      30.70       0.00     0.00    8101.86    1534.14   12034.79
00:12:23.666  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:23.666  	 Malloc0             :       2.02   31417.74      30.68       0.00     0.00    8093.78    1482.01   13405.09
00:12:23.666  
[2024-11-20T05:02:37.623Z]  ===================================================================================================================
00:12:23.666  
[2024-11-20T05:02:37.623Z]  Total                       :             125794.25     122.85       0.00     0.00    8106.89    1482.01   13405.09'
00:12:23.666    05:02:37 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:12:23.666   05:02:37 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]]
00:12:23.666    05:02:37 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:23.666  [2024-11-20 05:02:37.583036] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:23.666  [2024-11-20 05:02:37.583308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129600 ]
00:12:23.925  [2024-11-20 05:02:37.732922] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:23.925  [2024-11-20 05:02:37.753545] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:23.925  [2024-11-20 05:02:37.788825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:24.183  cpumask for 'job0' is too big
00:12:24.183  cpumask for 'job1' is too big
00:12:24.183  cpumask for 'job2' is too big
00:12:24.183  cpumask for 'job3' is too big
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs
00:12:26.716  Running I/O for 2 seconds...
00:12:26.716     125952.00 IOPS,   123.00 MiB/s
[2024-11-20T05:02:40.673Z]    126464.00 IOPS,   123.50 MiB/s
00:12:26.716                                                                                                  Latency(us)
00:12:26.716  
[2024-11-20T05:02:40.673Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:26.716  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:26.716  	 Malloc0             :       2.02   31572.74      30.83       0.00     0.00    8106.80    1489.45   12690.15
00:12:26.716  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:26.716  	 Malloc0             :       2.02   31551.81      30.81       0.00     0.00    8098.07    1459.67   11379.43
00:12:26.716  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:26.716  	 Malloc0             :       2.02   31527.27      30.79       0.00     0.00    8090.20    1467.11   12749.73
00:12:26.716  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:12:26.716  	 Malloc0             :       2.02   31506.88      30.77       0.00     0.00    8082.44    1459.67   14179.61
00:12:26.716  
[2024-11-20T05:02:40.673Z]  ===================================================================================================================
00:12:26.716  
[2024-11-20T05:02:40.673Z]  Total                       :             126158.70     123.20       0.00     0.00    8094.38    1459.67   14179.61'
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:12:26.716  
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:26.716  
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:12:26.716  
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:26.716   05:02:40 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:26.716    05:02:40 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-11-20 05:02:40.296730] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:29.251  [2024-11-20 05:02:40.297023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129647 ]
00:12:29.251  Using job config with 3 jobs
00:12:29.251  [2024-11-20 05:02:40.447097] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:29.251  [2024-11-20 05:02:40.475015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.251  [2024-11-20 05:02:40.507219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.251  cpumask for '\''job0'\'' is too big
00:12:29.251  cpumask for '\''job1'\'' is too big
00:12:29.251  cpumask for '\''job2'\'' is too big
00:12:29.251  Running I/O for 2 seconds...
00:12:29.251     125952.00 IOPS,   123.00 MiB/s
[2024-11-20T05:02:43.208Z]    126336.00 IOPS,   123.38 MiB/s
00:12:29.251                                                                                                  Latency(us)
00:12:29.251  
[2024-11-20T05:02:43.208Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.01   42112.55      41.13       0.00     0.00    6072.51    1601.16    9592.09
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.01   42076.56      41.09       0.00     0.00    6065.96    1549.03    9651.67
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.02   42040.63      41.06       0.00     0.00    6060.16    1608.61   11081.54
00:12:29.251  
[2024-11-20T05:02:43.208Z]  ===================================================================================================================
00:12:29.251  
[2024-11-20T05:02:43.208Z]  Total                       :             126229.73     123.27       0.00     0.00    6066.21    1549.03   11081.54'
00:12:29.251    05:02:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-11-20 05:02:40.296730] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:29.251  [2024-11-20 05:02:40.297023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129647 ]
00:12:29.251  Using job config with 3 jobs
00:12:29.251  [2024-11-20 05:02:40.447097] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:29.251  [2024-11-20 05:02:40.475015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.251  [2024-11-20 05:02:40.507219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.251  cpumask for '\''job0'\'' is too big
00:12:29.251  cpumask for '\''job1'\'' is too big
00:12:29.251  cpumask for '\''job2'\'' is too big
00:12:29.251  Running I/O for 2 seconds...
00:12:29.251     125952.00 IOPS,   123.00 MiB/s
[2024-11-20T05:02:43.208Z]    126336.00 IOPS,   123.38 MiB/s
00:12:29.251                                                                                                  Latency(us)
00:12:29.251  
[2024-11-20T05:02:43.208Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.01   42112.55      41.13       0.00     0.00    6072.51    1601.16    9592.09
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.01   42076.56      41.09       0.00     0.00    6065.96    1549.03    9651.67
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.02   42040.63      41.06       0.00     0.00    6060.16    1608.61   11081.54
00:12:29.251  
[2024-11-20T05:02:43.208Z]  ===================================================================================================================
00:12:29.251  
[2024-11-20T05:02:43.208Z]  Total                       :             126229.73     123.27       0.00     0.00    6066.21    1549.03   11081.54'
00:12:29.251    05:02:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-11-20 05:02:40.296730] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:29.251  [2024-11-20 05:02:40.297023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129647 ]
00:12:29.251  Using job config with 3 jobs
00:12:29.251  [2024-11-20 05:02:40.447097] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:29.251  [2024-11-20 05:02:40.475015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:29.251  [2024-11-20 05:02:40.507219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:29.251  cpumask for '\''job0'\'' is too big
00:12:29.251  cpumask for '\''job1'\'' is too big
00:12:29.251  cpumask for '\''job2'\'' is too big
00:12:29.251  Running I/O for 2 seconds...
00:12:29.251     125952.00 IOPS,   123.00 MiB/s
[2024-11-20T05:02:43.208Z]    126336.00 IOPS,   123.38 MiB/s
00:12:29.251                                                                                                  Latency(us)
00:12:29.251  
[2024-11-20T05:02:43.208Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.01   42112.55      41.13       0.00     0.00    6072.51    1601.16    9592.09
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.01   42076.56      41.09       0.00     0.00    6065.96    1549.03    9651.67
00:12:29.251  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:12:29.251  	 Malloc0             :       2.02   42040.63      41.06       0.00     0.00    6060.16    1608.61   11081.54
00:12:29.251  
[2024-11-20T05:02:43.208Z]  ===================================================================================================================
00:12:29.251  
[2024-11-20T05:02:43.208Z]  Total                       :             126229.73     123.27       0.00     0.00    6066.21    1549.03   11081.54'
00:12:29.251    05:02:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:12:29.251    05:02:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:12:29.251  
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:12:29.251  
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:12:29.251  
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:29.251   05:02:42 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:12:29.252  
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:12:29.252  
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:12:29.252   05:02:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:12:29.252    05:02:42 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:31.785   05:02:45 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-11-20 05:02:43.038577] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:31.785  [2024-11-20 05:02:43.038844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129693 ]
00:12:31.785  Using job config with 4 jobs
00:12:31.785  [2024-11-20 05:02:43.187321] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:31.785  [2024-11-20 05:02:43.208571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:31.785  [2024-11-20 05:02:43.243572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:31.785  cpumask for '\''job0'\'' is too big
00:12:31.785  cpumask for '\''job1'\'' is too big
00:12:31.785  cpumask for '\''job2'\'' is too big
00:12:31.785  cpumask for '\''job3'\'' is too big
00:12:31.785  Running I/O for 2 seconds...
00:12:31.785     129024.00 IOPS,   126.00 MiB/s
[2024-11-20T05:02:45.742Z]    126464.00 IOPS,   123.50 MiB/s
00:12:31.785                                                                                                  Latency(us)
00:12:31.785  
[2024-11-20T05:02:45.742Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:12:31.785  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc0             :       2.04   15595.01      15.23       0.00     0.00   16418.37    3083.17   25499.46
00:12:31.785  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc1             :       2.04   15584.13      15.22       0.00     0.00   16416.73    3619.37   25380.31
00:12:31.785  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc0             :       2.04   15563.54      15.20       0.00     0.00   16394.98    3038.49   22401.40
00:12:31.785  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc1             :       2.04   15553.20      15.19       0.00     0.00   16393.51    3544.90   22401.40
00:12:31.785  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc0             :       2.04   15543.08      15.18       0.00     0.00   16360.86    3038.49   19303.33
00:12:31.785  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc1             :       2.04   15532.81      15.17       0.00     0.00   16358.39    3559.80   19303.33
00:12:31.785  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc0             :       2.05   15522.33      15.16       0.00     0.00   16324.22    3038.49   18826.71
00:12:31.785  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:12:31.785  	 Malloc1             :       2.05   15511.48      15.15       0.00     0.00   16325.18    3559.80   18826.71
00:12:31.785  
[2024-11-20T05:02:45.743Z]  ===================================================================================================================
00:12:31.786  
[2024-11-20T05:02:45.743Z]  Total                       :             124405.57     121.49       0.00     0.00   16374.03    3038.49   25499.46'
00:12:31.786    05:02:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-11-20 05:02:43.038577] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
[2024-11-20T05:02:45.743Z]  Total                       :             124405.57     121.49       0.00     0.00   16374.03    3038.49   25499.46'
00:12:31.786    05:02:45 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-11-20 05:02:43.038577] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
[2024-11-20T05:02:45.743Z]  Total                       :             124405.57     121.49       0.00     0.00   16374.03    3038.49   25499.46'
00:12:31.786    05:02:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:12:31.786    05:02:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:12:31.786   05:02:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
00:12:31.786   05:02:45 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup
00:12:31.786   05:02:45 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:12:31.786   05:02:45 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:12:31.786  
00:12:31.786  real	0m11.049s
00:12:31.786  user	0m9.596s
00:12:31.786  sys	0m0.967s
00:12:31.786   05:02:45 bdevperf_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:31.786   05:02:45 bdevperf_config -- common/autotest_common.sh@10 -- # set +x
00:12:31.786  ************************************
00:12:31.786  END TEST bdevperf_config
00:12:31.786  ************************************
00:12:32.075    05:02:45  -- spdk/autotest.sh@182 -- # uname -s
00:12:32.075   05:02:45  -- spdk/autotest.sh@182 -- # [[ Linux == Linux ]]
00:12:32.075   05:02:45  -- spdk/autotest.sh@183 -- # run_test reactor_set_interrupt /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:12:32.075   05:02:45  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:32.075   05:02:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:32.075   05:02:45  -- common/autotest_common.sh@10 -- # set +x
00:12:32.075  ************************************
00:12:32.075  START TEST reactor_set_interrupt
00:12:32.075  ************************************
00:12:32.075   05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:12:32.075  * Looking for test storage...
00:12:32.075  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.075    05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:32.075     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lcov --version
00:12:32.075     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:32.075    05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:12:32.075    05:02:45 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1
00:12:32.076    05:02:45 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:32.076    05:02:45 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1
00:12:32.076    05:02:45 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:32.076     05:02:45 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2
00:12:32.076    05:02:45 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:12:32.076    05:02:45 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:32.076    05:02:45 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:32.076    05:02:45 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0
00:12:32.076    05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:32.076    05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:32.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.076  		--rc genhtml_branch_coverage=1
00:12:32.076  		--rc genhtml_function_coverage=1
00:12:32.076  		--rc genhtml_legend=1
00:12:32.076  		--rc geninfo_all_blocks=1
00:12:32.076  		--rc geninfo_unexecuted_blocks=1
00:12:32.076  		
00:12:32.076  		'
00:12:32.076    05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:32.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.076  		--rc genhtml_branch_coverage=1
00:12:32.076  		--rc genhtml_function_coverage=1
00:12:32.076  		--rc genhtml_legend=1
00:12:32.076  		--rc geninfo_all_blocks=1
00:12:32.076  		--rc geninfo_unexecuted_blocks=1
00:12:32.076  		
00:12:32.076  		'
00:12:32.076    05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:32.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.076  		--rc genhtml_branch_coverage=1
00:12:32.076  		--rc genhtml_function_coverage=1
00:12:32.076  		--rc genhtml_legend=1
00:12:32.076  		--rc geninfo_all_blocks=1
00:12:32.076  		--rc geninfo_unexecuted_blocks=1
00:12:32.076  		
00:12:32.076  		'
00:12:32.076    05:02:45 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:32.076  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.076  		--rc genhtml_branch_coverage=1
00:12:32.076  		--rc genhtml_function_coverage=1
00:12:32.076  		--rc genhtml_legend=1
00:12:32.076  		--rc geninfo_all_blocks=1
00:12:32.076  		--rc geninfo_unexecuted_blocks=1
00:12:32.076  		
00:12:32.076  		'
00:12:32.076   05:02:45 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:12:32.076      05:02:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reactor_set_interrupt.sh
00:12:32.076     05:02:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.076    05:02:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.076     05:02:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:12:32.076    05:02:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:12:32.076    05:02:45 reactor_set_interrupt -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@34 -- # set -e
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@36 -- # shopt -s extglob
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:12:32.076     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@23 -- # CONFIG_CET=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@30 -- # CONFIG_UBLK=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:12:32.076      05:02:45 reactor_set_interrupt -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@76 -- # CONFIG_FC=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:12:32.077      05:02:45 reactor_set_interrupt -- common/build_config.sh@90 -- # CONFIG_URING=n
00:12:32.077     05:02:45 reactor_set_interrupt -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:32.077        05:02:45 reactor_set_interrupt -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:32.077       05:02:45 reactor_set_interrupt -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:12:32.077      05:02:46 reactor_set_interrupt -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:12:32.077  #define SPDK_CONFIG_H
00:12:32.077  #define SPDK_CONFIG_AIO_FSDEV 1
00:12:32.077  #define SPDK_CONFIG_APPS 1
00:12:32.077  #define SPDK_CONFIG_ARCH native
00:12:32.077  #define SPDK_CONFIG_ASAN 1
00:12:32.077  #undef SPDK_CONFIG_AVAHI
00:12:32.077  #undef SPDK_CONFIG_CET
00:12:32.077  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:12:32.077  #define SPDK_CONFIG_COVERAGE 1
00:12:32.077  #define SPDK_CONFIG_CROSS_PREFIX 
00:12:32.077  #undef SPDK_CONFIG_CRYPTO
00:12:32.077  #undef SPDK_CONFIG_CRYPTO_MLX5
00:12:32.077  #undef SPDK_CONFIG_CUSTOMOCF
00:12:32.077  #undef SPDK_CONFIG_DAOS
00:12:32.077  #define SPDK_CONFIG_DAOS_DIR 
00:12:32.077  #define SPDK_CONFIG_DEBUG 1
00:12:32.077  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:12:32.077  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:12:32.077  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:12:32.077  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:12:32.077  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:12:32.077  #undef SPDK_CONFIG_DPDK_UADK
00:12:32.077  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:32.077  #define SPDK_CONFIG_EXAMPLES 1
00:12:32.077  #undef SPDK_CONFIG_FC
00:12:32.077  #define SPDK_CONFIG_FC_PATH 
00:12:32.077  #define SPDK_CONFIG_FIO_PLUGIN 1
00:12:32.077  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:12:32.077  #define SPDK_CONFIG_FSDEV 1
00:12:32.077  #undef SPDK_CONFIG_FUSE
00:12:32.077  #undef SPDK_CONFIG_FUZZER
00:12:32.077  #define SPDK_CONFIG_FUZZER_LIB 
00:12:32.077  #undef SPDK_CONFIG_GOLANG
00:12:32.077  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:12:32.077  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:12:32.077  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:12:32.077  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:12:32.077  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:12:32.077  #undef SPDK_CONFIG_HAVE_LIBBSD
00:12:32.077  #undef SPDK_CONFIG_HAVE_LZ4
00:12:32.077  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:12:32.077  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:12:32.077  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:12:32.077  #define SPDK_CONFIG_IDXD 1
00:12:32.077  #undef SPDK_CONFIG_IDXD_KERNEL
00:12:32.077  #undef SPDK_CONFIG_IPSEC_MB
00:12:32.077  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:12:32.077  #define SPDK_CONFIG_ISAL 1
00:12:32.077  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:12:32.077  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:12:32.077  #define SPDK_CONFIG_LIBDIR 
00:12:32.077  #undef SPDK_CONFIG_LTO
00:12:32.077  #define SPDK_CONFIG_MAX_LCORES 128
00:12:32.077  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:12:32.077  #define SPDK_CONFIG_NVME_CUSE 1
00:12:32.077  #undef SPDK_CONFIG_OCF
00:12:32.077  #define SPDK_CONFIG_OCF_PATH 
00:12:32.077  #define SPDK_CONFIG_OPENSSL_PATH 
00:12:32.077  #undef SPDK_CONFIG_PGO_CAPTURE
00:12:32.077  #define SPDK_CONFIG_PGO_DIR 
00:12:32.077  #undef SPDK_CONFIG_PGO_USE
00:12:32.077  #define SPDK_CONFIG_PREFIX /usr/local
00:12:32.077  #undef SPDK_CONFIG_RAID5F
00:12:32.077  #undef SPDK_CONFIG_RBD
00:12:32.077  #define SPDK_CONFIG_RDMA 1
00:12:32.077  #define SPDK_CONFIG_RDMA_PROV verbs
00:12:32.077  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:12:32.077  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:12:32.077  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:12:32.077  #undef SPDK_CONFIG_SHARED
00:12:32.077  #undef SPDK_CONFIG_SMA
00:12:32.077  #define SPDK_CONFIG_TESTS 1
00:12:32.077  #undef SPDK_CONFIG_TSAN
00:12:32.077  #undef SPDK_CONFIG_UBLK
00:12:32.077  #define SPDK_CONFIG_UBSAN 1
00:12:32.077  #define SPDK_CONFIG_UNIT_TESTS 1
00:12:32.078  #undef SPDK_CONFIG_URING
00:12:32.078  #define SPDK_CONFIG_URING_PATH 
00:12:32.078  #undef SPDK_CONFIG_URING_ZNS
00:12:32.078  #undef SPDK_CONFIG_USDT
00:12:32.078  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:12:32.078  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:12:32.078  #undef SPDK_CONFIG_VFIO_USER
00:12:32.078  #define SPDK_CONFIG_VFIO_USER_DIR 
00:12:32.078  #define SPDK_CONFIG_VHOST 1
00:12:32.078  #define SPDK_CONFIG_VIRTIO 1
00:12:32.078  #undef SPDK_CONFIG_VTUNE
00:12:32.078  #define SPDK_CONFIG_VTUNE_DIR 
00:12:32.078  #define SPDK_CONFIG_WERROR 1
00:12:32.078  #define SPDK_CONFIG_WPDK_DIR 
00:12:32.078  #undef SPDK_CONFIG_XNVME
00:12:32.078  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:12:32.078      05:02:46 reactor_set_interrupt -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:12:32.078     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:32.078      05:02:46 reactor_set_interrupt -- scripts/common.sh@15 -- # shopt -s extglob
00:12:32.395      05:02:46 reactor_set_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:32.395      05:02:46 reactor_set_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:32.395      05:02:46 reactor_set_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:32.395       05:02:46 reactor_set_interrupt -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:32.395       05:02:46 reactor_set_interrupt -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:32.395       05:02:46 reactor_set_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:32.395       05:02:46 reactor_set_interrupt -- paths/export.sh@5 -- # export PATH
00:12:32.395       05:02:46 reactor_set_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:32.395        05:02:46 reactor_set_interrupt -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:32.395       05:02:46 reactor_set_interrupt -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:32.395       05:02:46 reactor_set_interrupt -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@64 -- # TEST_TAG=N/A
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:12:32.395       05:02:46 reactor_set_interrupt -- pm/common@68 -- # uname -s
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@68 -- # PM_OS=Linux
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@76 -- # SUDO[0]=
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@76 -- # SUDO[1]='sudo -E'
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@81 -- # [[ Linux == Linux ]]
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:12:32.395      05:02:46 reactor_set_interrupt -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@58 -- # : 1
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@62 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@64 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@66 -- # : 1
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@68 -- # : 1
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@70 -- # :
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@72 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@74 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@76 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@78 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@80 -- # : 1
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@82 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@84 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@86 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@88 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@90 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@92 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@94 -- # : 0
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:12:32.395     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@96 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@98 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@100 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@102 -- # : rdma
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@104 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@106 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@108 -- # : 1
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@110 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@112 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@114 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@116 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@118 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@120 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@122 -- # : 1
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@124 -- # : 1
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@128 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@130 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@132 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@134 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@136 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@138 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@140 -- # : main
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@142 -- # : true
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@144 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@146 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@148 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@150 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@152 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@154 -- # :
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@156 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@158 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@160 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@162 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@164 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@166 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@169 -- # :
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@171 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@173 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@175 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@177 -- # : 0
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:12:32.396     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@206 -- # cat
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@259 -- # export QEMU_BIN=
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@259 -- # QEMU_BIN=
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@269 -- # _LCOV=
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@275 -- # lcov_opt=
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@279 -- # export valgrind=
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@279 -- # valgrind=
00:12:32.397      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@285 -- # uname -s
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@289 -- # MAKE=make
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@309 -- # TEST_MODE=
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@331 -- # [[ -z 129774 ]]
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@331 -- # kill -0 129774
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@344 -- # local mount target_dir
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:12:32.397      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.zUkb0h
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.zUkb0h/tests/interrupt /tmp/spdk.zUkb0h
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.397      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@340 -- # df -T
00:12:32.397      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1248956416
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253683200
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4726784
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=8443342848
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=20616794112
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=12156674048
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=6264971264
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=6268399616
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=103061504
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968
00:12:32.397     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=6334464
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=1253675008
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253679104
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=4096
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # avails["$mount"]=97115512832
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@376 -- # uses["$mount"]=2587267072
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:12:32.398  * Looking for test storage...
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:32.398      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:32.398      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@385 -- # mount=/
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@387 -- # target_space=8443342848
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]]
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]]
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@394 -- # new_size=14371266560
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.398  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@402 -- # return 0
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1680 -- # set -o errtrace
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1685 -- # true
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1687 -- # xtrace_fd
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@27 -- # exec
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@29 -- # exec
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@31 -- # xtrace_restore
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@18 -- # set -x
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:32.398      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:32.398      05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lcov --version
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@336 -- # IFS=.-:
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@336 -- # read -ra ver1
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@337 -- # IFS=.-:
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@337 -- # read -ra ver2
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@338 -- # local 'op=<'
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@340 -- # ver1_l=2
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@341 -- # ver2_l=1
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@344 -- # case "$op" in
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@345 -- # : 1
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@365 -- # decimal 1
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=1
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 1
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@365 -- # ver1[v]=1
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@366 -- # decimal 2
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@353 -- # local d=2
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:32.398      05:02:46 reactor_set_interrupt -- scripts/common.sh@355 -- # echo 2
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@366 -- # ver2[v]=2
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:32.398     05:02:46 reactor_set_interrupt -- scripts/common.sh@368 -- # return 0
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:32.398  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.398  		--rc genhtml_branch_coverage=1
00:12:32.398  		--rc genhtml_function_coverage=1
00:12:32.398  		--rc genhtml_legend=1
00:12:32.398  		--rc geninfo_all_blocks=1
00:12:32.398  		--rc geninfo_unexecuted_blocks=1
00:12:32.398  		
00:12:32.398  		'
00:12:32.398     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:32.398  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.398  		--rc genhtml_branch_coverage=1
00:12:32.399  		--rc genhtml_function_coverage=1
00:12:32.399  		--rc genhtml_legend=1
00:12:32.399  		--rc geninfo_all_blocks=1
00:12:32.399  		--rc geninfo_unexecuted_blocks=1
00:12:32.399  		
00:12:32.399  		'
00:12:32.399     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:32.399  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.399  		--rc genhtml_branch_coverage=1
00:12:32.399  		--rc genhtml_function_coverage=1
00:12:32.399  		--rc genhtml_legend=1
00:12:32.399  		--rc geninfo_all_blocks=1
00:12:32.399  		--rc geninfo_unexecuted_blocks=1
00:12:32.399  		
00:12:32.399  		'
00:12:32.399     05:02:46 reactor_set_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:32.399  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:32.399  		--rc genhtml_branch_coverage=1
00:12:32.399  		--rc genhtml_function_coverage=1
00:12:32.399  		--rc genhtml_legend=1
00:12:32.399  		--rc geninfo_all_blocks=1
00:12:32.399  		--rc geninfo_unexecuted_blocks=1
00:12:32.399  		
00:12:32.399  		'
00:12:32.399    05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh
00:12:32.399    05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:32.399    05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1
00:12:32.399    05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2
00:12:32.399    05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4
00:12:32.399    05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07
00:12:32.399    05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=129833
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:12:32.399   05:02:46 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 129833 /var/tmp/spdk.sock
00:12:32.399   05:02:46 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 129833 ']'
00:12:32.399   05:02:46 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:32.399   05:02:46 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:32.399   05:02:46 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:32.399  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:32.399   05:02:46 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:32.399   05:02:46 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:12:32.399  [2024-11-20 05:02:46.251905] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:32.399  [2024-11-20 05:02:46.252180] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129833 ]
00:12:32.658  [2024-11-20 05:02:46.416994] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:32.658  [2024-11-20 05:02:46.436150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:32.658  [2024-11-20 05:02:46.480996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:32.658  [2024-11-20 05:02:46.481128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:32.658  [2024-11-20 05:02:46.481137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:32.658  [2024-11-20 05:02:46.561009] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:33.595   05:02:47 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:33.595   05:02:47 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0
00:12:33.595   05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem
00:12:33.595   05:02:47 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:33.854  Malloc0
00:12:33.854  Malloc1
00:12:33.854  Malloc2
00:12:33.854   05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio
00:12:33.854    05:02:47 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s
00:12:33.854   05:02:47 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:12:33.854   05:02:47 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:12:33.854  5000+0 records in
00:12:33.854  5000+0 records out
00:12:33.854  10240000 bytes (10 MB, 9.8 MiB) copied, 0.0210647 s, 486 MB/s
00:12:33.854   05:02:47 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:12:34.113  AIO0
00:12:34.113   05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 129833
00:12:34.113   05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 129833 without_thd
00:12:34.113   05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=129833
00:12:34.113   05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd
00:12:34.113   05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:12:34.113    05:02:47 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:12:34.113    05:02:47 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1
00:12:34.113    05:02:47 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:34.113    05:02:47 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1
00:12:34.113    05:02:47 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:34.113     05:02:47 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:34.113     05:02:47 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:34.371    05:02:48 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1
00:12:34.371   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:12:34.371    05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:12:34.371    05:02:48 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4
00:12:34.371    05:02:48 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:34.371    05:02:48 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4
00:12:34.371    05:02:48 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:34.371     05:02:48 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:34.371     05:02:48 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:34.630    05:02:48 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo ''
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:12:34.630  spdk_thread ids are 1 on reactor0.
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 129833 0
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129833 0 idle
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129833
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:34.630   05:02:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:34.630    05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129833 -w 256
00:12:34.630    05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129833 root      20   0   20.1t  82776  29264 S   0.0   0.7   0:00.33 reactor_0'
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129833 root 20 0 20.1t 82776 29264 S 0.0 0.7 0:00.33 reactor_0
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 129833 1
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129833 1 idle
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129833
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129833 -w 256
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129836 root      20   0   20.1t  82776  29264 S   0.0   0.7   0:00.00 reactor_1'
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129836 root 20 0 20.1t 82776 29264 S 0.0 0.7 0:00.00 reactor_1
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 129833 2
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129833 2 idle
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129833
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:34.890   05:02:48 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129833 -w 256
00:12:34.890    05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129837 root      20   0   20.1t  82776  29264 S   0.0   0.7   0:00.00 reactor_2'
00:12:35.149    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129837 root 20 0 20.1t 82776 29264 S 0.0 0.7 0:00.00 reactor_2
00:12:35.149    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:35.149    05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']'
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}"
00:12:35.149   05:02:48 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2
00:12:35.425  [2024-11-20 05:02:49.231438] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:35.425   05:02:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:12:35.684  [2024-11-20 05:02:49.499163] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:12:35.684  [2024-11-20 05:02:49.500234] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:35.684   05:02:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:12:35.942  [2024-11-20 05:02:49.774921] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:12:35.942  [2024-11-20 05:02:49.775713] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 129833 0
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 129833 0 busy
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129833
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:35.942   05:02:49 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:35.943   05:02:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:35.943   05:02:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:35.943    05:02:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129833 -w 256
00:12:35.943    05:02:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129833 root      20   0   20.1t  82940  29272 R  99.9   0.7   0:00.78 reactor_0'
00:12:36.202    05:02:49 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129833 root 20 0 20.1t 82940 29272 R 99.9 0.7 0:00.78 reactor_0
00:12:36.202    05:02:49 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:36.202    05:02:49 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 129833 2
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 129833 2 busy
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129833
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:36.202   05:02:49 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:36.202    05:02:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129833 -w 256
00:12:36.202    05:02:49 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129837 root      20   0   20.1t  82940  29272 R  99.9   0.7   0:00.34 reactor_2'
00:12:36.202    05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129837 root 20 0 20.1t 82940 29272 R 99.9 0.7 0:00.34 reactor_2
00:12:36.202    05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:36.202    05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:36.202   05:02:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:12:36.461  [2024-11-20 05:02:50.395000] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:12:36.461  [2024-11-20 05:02:50.395646] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']'
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 129833 2
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129833 2 idle
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129833
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:36.461   05:02:50 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:36.461    05:02:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129833 -w 256
00:12:36.461    05:02:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:36.719   05:02:50 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129837 root      20   0   20.1t  83012  29272 S   0.0   0.7   0:00.61 reactor_2'
00:12:36.719    05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129837 root 20 0 20.1t 83012 29272 S 0.0 0.7 0:00.61 reactor_2
00:12:36.719    05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:36.720    05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:36.720   05:02:50 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:36.720   05:02:50 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:36.720   05:02:50 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:36.720   05:02:50 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:36.720   05:02:50 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:36.720   05:02:50 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:36.720   05:02:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:12:36.978  [2024-11-20 05:02:50.835026] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:12:36.978  [2024-11-20 05:02:50.835840] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:36.978   05:02:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']'
00:12:36.978   05:02:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}"
00:12:36.978   05:02:50 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1
00:12:37.237  [2024-11-20 05:02:51.119155] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 129833 0
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129833 0 idle
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129833
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:37.237   05:02:51 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:37.237    05:02:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129833 -w 256
00:12:37.237    05:02:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129833 root      20   0   20.1t  83164  29272 S   0.0   0.7   0:01.67 reactor_0'
00:12:37.496    05:02:51 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129833 root 20 0 20.1t 83164 29272 S 0.0 0.7 0:01.67 reactor_0
00:12:37.496    05:02:51 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:37.496    05:02:51 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@77 -- # return 0
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT
00:12:37.496   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 129833
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 129833 ']'
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 129833
00:12:37.496    05:02:51 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:37.496    05:02:51 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129833
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:37.496  killing process with pid 129833
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129833'
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 129833
00:12:37.496   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 129833
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=129981
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:12:37.755   05:02:51 reactor_set_interrupt -- interrupt/interrupt_common.sh@26 -- # waitforlisten 129981 /var/tmp/spdk.sock
00:12:37.755   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@835 -- # '[' -z 129981 ']'
00:12:37.755   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:37.755   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:37.755  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:37.755   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:37.755   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:37.755   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:12:37.755  [2024-11-20 05:02:51.635535] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:37.755  [2024-11-20 05:02:51.635792] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid129981 ]
00:12:38.014  [2024-11-20 05:02:51.802953] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:38.014  [2024-11-20 05:02:51.821566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:38.014  [2024-11-20 05:02:51.854439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:38.014  [2024-11-20 05:02:51.854573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:38.014  [2024-11-20 05:02:51.854588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:38.014  [2024-11-20 05:02:51.935029] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:38.014   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:38.014   05:02:51 reactor_set_interrupt -- common/autotest_common.sh@868 -- # return 0
00:12:38.014   05:02:51 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem
00:12:38.014   05:02:51 reactor_set_interrupt -- interrupt/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:38.581  Malloc0
00:12:38.581  Malloc1
00:12:38.581  Malloc2
00:12:38.581   05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio
00:12:38.581    05:02:52 reactor_set_interrupt -- interrupt/common.sh@77 -- # uname -s
00:12:38.581   05:02:52 reactor_set_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:12:38.582   05:02:52 reactor_set_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:12:38.582  5000+0 records in
00:12:38.582  5000+0 records out
00:12:38.582  10240000 bytes (10 MB, 9.8 MiB) copied, 0.027047 s, 379 MB/s
00:12:38.582   05:02:52 reactor_set_interrupt -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:12:38.841  AIO0
00:12:38.841   05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 129981
00:12:38.841   05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 129981
00:12:38.841   05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=129981
00:12:38.841   05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=
00:12:38.841   05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask))
00:12:38.841    05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1
00:12:38.841    05:02:52 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x1
00:12:38.841    05:02:52 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:38.841    05:02:52 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=1
00:12:38.841    05:02:52 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:38.841     05:02:52 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:38.841     05:02:52 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:38.841    05:02:52 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo 1
00:12:39.099   05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask))
00:12:39.099    05:02:52 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4
00:12:39.099    05:02:52 reactor_set_interrupt -- interrupt/common.sh@57 -- # local reactor_cpumask=0x4
00:12:39.099    05:02:52 reactor_set_interrupt -- interrupt/common.sh@58 -- # local grep_str
00:12:39.099    05:02:52 reactor_set_interrupt -- interrupt/common.sh@60 -- # reactor_cpumask=4
00:12:39.099    05:02:52 reactor_set_interrupt -- interrupt/common.sh@61 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:39.099     05:02:52 reactor_set_interrupt -- interrupt/common.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py thread_get_stats
00:12:39.099     05:02:52 reactor_set_interrupt -- interrupt/common.sh@64 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id'
00:12:39.099    05:02:53 reactor_set_interrupt -- interrupt/common.sh@64 -- # echo ''
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]]
00:12:39.099  spdk_thread ids are 1 on reactor0.
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.'
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 129981 0
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129981 0 idle
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129981
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:39.099   05:02:53 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:39.100   05:02:53 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:39.100   05:02:53 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:39.100   05:02:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:39.100   05:02:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:39.100    05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:39.100    05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129981 -w 256
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129981 root      20   0   20.1t  84112  29284 S   0.0   0.7   0:00.29 reactor_0'
00:12:39.359    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129981 root 20 0 20.1t 84112 29284 S 0.0 0.7 0:00.29 reactor_0
00:12:39.359    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:39.359    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 129981 1
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129981 1 idle
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129981
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:39.359   05:02:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:39.359    05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129981 -w 256
00:12:39.359    05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129984 root      20   0   20.1t  84112  29284 S   0.0   0.7   0:00.00 reactor_1'
00:12:39.618    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129984 root 20 0 20.1t 84112 29284 S 0.0 0.7 0:00.00 reactor_1
00:12:39.618    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:39.618    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2}
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 129981 2
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129981 2 idle
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129981
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:39.618   05:02:53 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:39.619    05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129981 -w 256
00:12:39.619    05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129985 root      20   0   20.1t  84112  29284 S   0.0   0.7   0:00.00 reactor_2'
00:12:39.619    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129985 root 20 0 20.1t 84112 29284 S 0.0 0.7 0:00.00 reactor_2
00:12:39.619    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:39.619    05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']'
00:12:39.619   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d
00:12:40.009  [2024-11-20 05:02:53.772172] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0.
00:12:40.010  [2024-11-20 05:02:53.773100] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode.
00:12:40.010  [2024-11-20 05:02:53.773580] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:40.010   05:02:53 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d
00:12:40.299  [2024-11-20 05:02:54.032030] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2.
00:12:40.299  [2024-11-20 05:02:54.032621] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 129981 0
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 129981 0 busy
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129981
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:40.299    05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129981 -w 256
00:12:40.299    05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129981 root      20   0   20.1t  84188  29284 R  99.9   0.7   0:00.73 reactor_0'
00:12:40.299    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129981 root 20 0 20.1t 84188 29284 R 99.9 0.7 0:00.73 reactor_0
00:12:40.299    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:40.299    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 129981 2
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 129981 2 busy
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129981
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:40.299   05:02:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:40.299    05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129981 -w 256
00:12:40.299    05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129985 root      20   0   20.1t  84188  29284 R  99.9   0.7   0:00.34 reactor_2'
00:12:40.558    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129985 root 20 0 20.1t 84188 29284 R 99.9 0.7 0:00.34 reactor_2
00:12:40.558    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:40.558    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
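The `reactor_is_busy_or_idle` trace above (common.sh@25-35) samples `top` in batch mode, isolates the reactor thread's row, and reads %CPU from column 9 before truncating it for integer comparison. The following is a hedged reconstruction of that single-sample pipeline from the traced commands only — the real common.sh retries up to 10 times, and the thresholds (65 busy, 30 idle) are the values shown in the trace:

```shell
# Reconstruction of the traced cpu_rate extraction (interrupt/common.sh@26-32).
# In the real test, top_reactor comes from: top -bHn 1 -p $pid -w 256 | grep reactor_$idx
top_reactor=' 129985 root      20   0   20.1t  84188  29284 R  99.9   0.7   0:00.34 reactor_2'

# common.sh@27: strip leading whitespace, take column 9 (%CPU)
cpu_rate=$(echo "$top_reactor" | sed -e 's/^\s*//g' | awk '{print $9}')
# common.sh@28: drop the fractional part so (( )) arithmetic works (99.9 -> 99)
cpu_rate=${cpu_rate%.*}

busy_threshold=65
idle_threshold=30

state=unknown
if (( cpu_rate >= busy_threshold )); then
    state=busy            # trace path: (( cpu_rate < busy_threshold )) was false
elif (( cpu_rate <= idle_threshold )); then
    state=idle            # trace path: (( cpu_rate > idle_threshold )) was false
fi
echo "$state"
```

With the sample row from the trace, column 9 yields `99.9`, truncated to `99`, which clears the busy threshold — matching the `return 0` seen at common.sh@35.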
00:12:40.558   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2
00:12:40.816  [2024-11-20 05:02:54.640364] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2.
00:12:40.816  [2024-11-20 05:02:54.640737] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']'
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 129981 2
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129981 2 idle
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129981
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=2
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:40.816   05:02:54 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:40.816    05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129981 -w 256
00:12:40.816    05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_2
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129985 root      20   0   20.1t  84188  29284 S   0.0   0.7   0:00.60 reactor_2'
00:12:41.075    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129985 root 20 0 20.1t 84188 29284 S 0.0 0.7 0:00.60 reactor_2
00:12:41.075    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:41.075    05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:41.075   05:02:54 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0
00:12:41.075  [2024-11-20 05:02:55.020413] interrupt_tgt.c:  99:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0.
00:12:41.075  [2024-11-20 05:02:55.020991] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode.
00:12:41.075  [2024-11-20 05:02:55.021203] interrupt_tgt.c:  36:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']'
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 129981 0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 129981 0 idle
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@10 -- # local pid=129981
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@20 -- # hash top
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:12:41.335    05:02:55 reactor_set_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 129981 -w 256
00:12:41.335    05:02:55 reactor_set_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 129981 root      20   0   20.1t  84376  29284 S   0.0   0.7   0:01.55 reactor_0'
00:12:41.335    05:02:55 reactor_set_interrupt -- interrupt/common.sh@27 -- # echo 129981 root 20 0 20.1t 84376 29284 S 0.0 0.7 0:01.55 reactor_0
00:12:41.335    05:02:55 reactor_set_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:12:41.335    05:02:55 reactor_set_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/common.sh@35 -- # return 0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@72 -- # return 0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@82 -- # return 0
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT
00:12:41.335   05:02:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 129981
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@954 -- # '[' -z 129981 ']'
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@958 -- # kill -0 129981
00:12:41.335    05:02:55 reactor_set_interrupt -- common/autotest_common.sh@959 -- # uname
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:41.335    05:02:55 reactor_set_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 129981
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:41.335  killing process with pid 129981
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 129981'
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@973 -- # kill 129981
00:12:41.335   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@978 -- # wait 129981
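The `killprocess` sequence traced above (autotest_common.sh@954-978) validates the pid, probes liveness with `kill -0`, refuses to signal a bare `sudo` wrapper, then terminates and reaps the target. A minimal sketch assembled from those traced steps — `killprocess_sketch` is an illustrative name, not the real helper:

```shell
# Sketch of the killprocess flow seen in the trace (illustrative, not SPDK source).
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1                       # @954: reject empty pid
    kill -0 "$pid" 2>/dev/null || return 1          # @958: is the process alive?
    if [ "$(uname)" = Linux ]; then                 # @959
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @960
        [ "$process_name" = sudo ] && return 1      # @964: don't kill the sudo wrapper
    fi
    echo "killing process with pid $pid"            # @972
    kill "$pid"                                     # @973: SIGTERM
    wait "$pid" 2>/dev/null || true                 # @978: reap; ignore 143 exit status
}
```

Note the trace prints "killing process with pid 129981" twice because xtrace echoes the command line and the command itself emits the message.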
00:12:41.594   05:02:55 reactor_set_interrupt -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup
00:12:41.594   05:02:55 reactor_set_interrupt -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:12:41.594  
00:12:41.594  real	0m9.692s
00:12:41.594  user	0m10.052s
00:12:41.594  sys	0m1.559s
00:12:41.594   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:41.594   05:02:55 reactor_set_interrupt -- common/autotest_common.sh@10 -- # set +x
00:12:41.594  ************************************
00:12:41.594  END TEST reactor_set_interrupt
00:12:41.594  ************************************
00:12:41.594   05:02:55  -- spdk/autotest.sh@184 -- # run_test reap_unregistered_poller /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:12:41.594   05:02:55  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:41.594   05:02:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:41.594   05:02:55  -- common/autotest_common.sh@10 -- # set +x
00:12:41.594  ************************************
00:12:41.594  START TEST reap_unregistered_poller
00:12:41.594  ************************************
00:12:41.594   05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:12:41.855  * Looking for test storage...
00:12:41.855  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:41.855    05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:41.855     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lcov --version
00:12:41.855     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:41.855    05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-:
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-:
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<'
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:41.855     05:02:55 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:41.855    05:02:55 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0
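The `lt 1.15 2` trace above (scripts/common.sh@333-368) splits both version strings on `.`, `-` and `:` into arrays via `IFS=.-: read -ra`, then compares them component by component. A compact sketch of that comparison, reconstructed from the traced commands (the real cmp_versions also validates each component with `[[ $d =~ ^[0-9]+$ ]]`, omitted here):

```shell
# Sketch of the traced less-than version comparison (scripts/common.sh).
# Missing components default to 0, so "2" compares as "2.0" against "1.15".
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"    # common.sh@336
    IFS=.-: read -ra ver2 <<< "$2"    # common.sh@337
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do # common.sh@364
        if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then return 1; fi
        if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then return 0; fi
    done
    return 1                          # equal is not less-than
}
```

In the trace this check gates whether the detected lcov (1.15) is older than 2, which selects the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` option spelling exported just below.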
00:12:41.855    05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:41.855    05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:41.855  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:41.855  		--rc genhtml_branch_coverage=1
00:12:41.855  		--rc genhtml_function_coverage=1
00:12:41.855  		--rc genhtml_legend=1
00:12:41.855  		--rc geninfo_all_blocks=1
00:12:41.855  		--rc geninfo_unexecuted_blocks=1
00:12:41.855  		
00:12:41.855  		'
00:12:41.855    05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:41.855  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:41.855  		--rc genhtml_branch_coverage=1
00:12:41.855  		--rc genhtml_function_coverage=1
00:12:41.855  		--rc genhtml_legend=1
00:12:41.855  		--rc geninfo_all_blocks=1
00:12:41.855  		--rc geninfo_unexecuted_blocks=1
00:12:41.855  		
00:12:41.855  		'
00:12:41.855    05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:41.855  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:41.855  		--rc genhtml_branch_coverage=1
00:12:41.855  		--rc genhtml_function_coverage=1
00:12:41.855  		--rc genhtml_legend=1
00:12:41.855  		--rc geninfo_all_blocks=1
00:12:41.855  		--rc geninfo_unexecuted_blocks=1
00:12:41.855  		
00:12:41.855  		'
00:12:41.855    05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:41.855  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:41.855  		--rc genhtml_branch_coverage=1
00:12:41.855  		--rc genhtml_function_coverage=1
00:12:41.855  		--rc genhtml_legend=1
00:12:41.855  		--rc geninfo_all_blocks=1
00:12:41.855  		--rc geninfo_unexecuted_blocks=1
00:12:41.855  		
00:12:41.855  		'
00:12:41.855   05:02:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/interrupt_common.sh
00:12:41.855      05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # dirname /home/vagrant/spdk_repo/spdk/test/interrupt/reap_unregistered_poller.sh
00:12:41.855     05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:41.855    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@5 -- # testdir=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:41.856     05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/interrupt/../..
00:12:41.856    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@6 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:12:41.856    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@34 -- # set -e
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@36 -- # shopt -s extglob
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@23 -- # CONFIG_CET=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@30 -- # CONFIG_UBLK=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@56 -- # CONFIG_XNVME=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@72 -- # CONFIG_SHARED=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@76 -- # CONFIG_FC=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:12:41.856      05:02:55 reap_unregistered_poller -- common/build_config.sh@90 -- # CONFIG_URING=n
00:12:41.856     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:41.856        05:02:55 reap_unregistered_poller -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:12:41.856       05:02:55 reap_unregistered_poller -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:12:41.856      05:02:55 reap_unregistered_poller -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:12:41.857      05:02:55 reap_unregistered_poller -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:12:41.857  #define SPDK_CONFIG_H
00:12:41.857  #define SPDK_CONFIG_AIO_FSDEV 1
00:12:41.857  #define SPDK_CONFIG_APPS 1
00:12:41.857  #define SPDK_CONFIG_ARCH native
00:12:41.857  #define SPDK_CONFIG_ASAN 1
00:12:41.857  #undef SPDK_CONFIG_AVAHI
00:12:41.857  #undef SPDK_CONFIG_CET
00:12:41.857  #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:12:41.857  #define SPDK_CONFIG_COVERAGE 1
00:12:41.857  #define SPDK_CONFIG_CROSS_PREFIX 
00:12:41.857  #undef SPDK_CONFIG_CRYPTO
00:12:41.857  #undef SPDK_CONFIG_CRYPTO_MLX5
00:12:41.857  #undef SPDK_CONFIG_CUSTOMOCF
00:12:41.857  #undef SPDK_CONFIG_DAOS
00:12:41.857  #define SPDK_CONFIG_DAOS_DIR 
00:12:41.857  #define SPDK_CONFIG_DEBUG 1
00:12:41.857  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:12:41.857  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build
00:12:41.857  #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include
00:12:41.857  #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib
00:12:41.857  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:12:41.857  #undef SPDK_CONFIG_DPDK_UADK
00:12:41.857  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:12:41.857  #define SPDK_CONFIG_EXAMPLES 1
00:12:41.857  #undef SPDK_CONFIG_FC
00:12:41.857  #define SPDK_CONFIG_FC_PATH 
00:12:41.857  #define SPDK_CONFIG_FIO_PLUGIN 1
00:12:41.857  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:12:41.857  #define SPDK_CONFIG_FSDEV 1
00:12:41.857  #undef SPDK_CONFIG_FUSE
00:12:41.857  #undef SPDK_CONFIG_FUZZER
00:12:41.857  #define SPDK_CONFIG_FUZZER_LIB 
00:12:41.857  #undef SPDK_CONFIG_GOLANG
00:12:41.857  #undef SPDK_CONFIG_HAVE_ARC4RANDOM
00:12:41.857  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:12:41.857  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:12:41.857  #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:12:41.857  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:12:41.857  #undef SPDK_CONFIG_HAVE_LIBBSD
00:12:41.857  #undef SPDK_CONFIG_HAVE_LZ4
00:12:41.857  #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:12:41.857  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:12:41.857  #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:12:41.857  #define SPDK_CONFIG_IDXD 1
00:12:41.857  #undef SPDK_CONFIG_IDXD_KERNEL
00:12:41.857  #undef SPDK_CONFIG_IPSEC_MB
00:12:41.857  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:12:41.857  #define SPDK_CONFIG_ISAL 1
00:12:41.857  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:12:41.857  #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:12:41.857  #define SPDK_CONFIG_LIBDIR 
00:12:41.857  #undef SPDK_CONFIG_LTO
00:12:41.857  #define SPDK_CONFIG_MAX_LCORES 128
00:12:41.857  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:12:41.857  #define SPDK_CONFIG_NVME_CUSE 1
00:12:41.857  #undef SPDK_CONFIG_OCF
00:12:41.857  #define SPDK_CONFIG_OCF_PATH 
00:12:41.857  #define SPDK_CONFIG_OPENSSL_PATH 
00:12:41.857  #undef SPDK_CONFIG_PGO_CAPTURE
00:12:41.857  #define SPDK_CONFIG_PGO_DIR 
00:12:41.857  #undef SPDK_CONFIG_PGO_USE
00:12:41.857  #define SPDK_CONFIG_PREFIX /usr/local
00:12:41.857  #undef SPDK_CONFIG_RAID5F
00:12:41.857  #undef SPDK_CONFIG_RBD
00:12:41.857  #define SPDK_CONFIG_RDMA 1
00:12:41.857  #define SPDK_CONFIG_RDMA_PROV verbs
00:12:41.857  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:12:41.857  #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:12:41.857  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:12:41.857  #undef SPDK_CONFIG_SHARED
00:12:41.857  #undef SPDK_CONFIG_SMA
00:12:41.857  #define SPDK_CONFIG_TESTS 1
00:12:41.857  #undef SPDK_CONFIG_TSAN
00:12:41.857  #undef SPDK_CONFIG_UBLK
00:12:41.857  #define SPDK_CONFIG_UBSAN 1
00:12:41.857  #define SPDK_CONFIG_UNIT_TESTS 1
00:12:41.857  #undef SPDK_CONFIG_URING
00:12:41.857  #define SPDK_CONFIG_URING_PATH 
00:12:41.857  #undef SPDK_CONFIG_URING_ZNS
00:12:41.857  #undef SPDK_CONFIG_USDT
00:12:41.857  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:12:41.857  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:12:41.857  #undef SPDK_CONFIG_VFIO_USER
00:12:41.857  #define SPDK_CONFIG_VFIO_USER_DIR 
00:12:41.857  #define SPDK_CONFIG_VHOST 1
00:12:41.857  #define SPDK_CONFIG_VIRTIO 1
00:12:41.857  #undef SPDK_CONFIG_VTUNE
00:12:41.857  #define SPDK_CONFIG_VTUNE_DIR 
00:12:41.857  #define SPDK_CONFIG_WERROR 1
00:12:41.857  #define SPDK_CONFIG_WPDK_DIR 
00:12:41.857  #undef SPDK_CONFIG_XNVME
00:12:41.857  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:12:41.857      05:02:55 reap_unregistered_poller -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:41.857      05:02:55 reap_unregistered_poller -- scripts/common.sh@15 -- # shopt -s extglob
00:12:41.857      05:02:55 reap_unregistered_poller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:41.857      05:02:55 reap_unregistered_poller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:41.857      05:02:55 reap_unregistered_poller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:41.857       05:02:55 reap_unregistered_poller -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:41.857       05:02:55 reap_unregistered_poller -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:41.857       05:02:55 reap_unregistered_poller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:41.857       05:02:55 reap_unregistered_poller -- paths/export.sh@5 -- # export PATH
00:12:41.857       05:02:55 reap_unregistered_poller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:41.857        05:02:55 reap_unregistered_poller -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:12:41.857       05:02:55 reap_unregistered_poller -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:12:41.857       05:02:55 reap_unregistered_poller -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@64 -- # TEST_TAG=N/A
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:12:41.857       05:02:55 reap_unregistered_poller -- pm/common@68 -- # uname -s
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@68 -- # PM_OS=Linux
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@76 -- # SUDO[0]=
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@76 -- # SUDO[1]='sudo -E'
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@81 -- # [[ Linux == Linux ]]
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:12:41.857      05:02:55 reap_unregistered_poller -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@58 -- # : 1
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@62 -- # : 0
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@64 -- # : 0
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@66 -- # : 1
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@68 -- # : 1
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@70 -- # :
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@72 -- # : 0
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@74 -- # : 0
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@76 -- # : 0
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@78 -- # : 0
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:12:41.857     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@80 -- # : 1
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@82 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@84 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@86 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@88 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@90 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@92 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@94 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@96 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@98 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@100 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@102 -- # : rdma
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@104 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@106 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@108 -- # : 1
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@110 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@112 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@114 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@116 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@118 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@120 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@122 -- # : 1
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@124 -- # : 1
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@126 -- # : /home/vagrant/spdk_repo/dpdk/build
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@128 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@130 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@132 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@134 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@136 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@138 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@140 -- # : main
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@142 -- # : true
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@144 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@146 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@148 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@150 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@152 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@154 -- # :
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@156 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@158 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@160 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@162 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@164 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@166 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@169 -- # :
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@171 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@173 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@175 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@177 -- # : 0
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:12:41.858     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@206 -- # cat
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@259 -- # export QEMU_BIN=
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@259 -- # QEMU_BIN=
00:12:41.859     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@260 -- # export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@269 -- # _LCOV=
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@275 -- # lcov_opt=
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@279 -- # export valgrind=
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@279 -- # valgrind=
00:12:42.119      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@285 -- # uname -s
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@289 -- # MAKE=make
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@309 -- # TEST_MODE=
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@331 -- # [[ -z 130142 ]]
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@331 -- # kill -0 130142
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@344 -- # local mount target_dir
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:12:42.119      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.voz6kV
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/interrupt /tmp/spdk.voz6kV/tests/interrupt /tmp/spdk.voz6kV
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.119      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@340 -- # df -T
00:12:42.119      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:12:42.119     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1248956416
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253683200
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4726784
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda1
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=8443301888
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=20616794112
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=12156715008
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=6264971264
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=6268399616
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=3428352
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=5242880
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=5242880
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda15
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=103061504
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=109395968
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=6334464
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=1253675008
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253679104
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=4096
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/ubuntu22-vg-autotest/ubuntu2204-libvirt/output
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # avails["$mount"]=97116643328
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@376 -- # uses["$mount"]=2586136576
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:12:42.120  * Looking for test storage...
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:42.120      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:42.120      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@385 -- # mount=/
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@387 -- # target_space=8443301888
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == tmpfs ]]
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ ext4 == ramfs ]]
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@394 -- # new_size=14371307520
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/interrupt
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:42.120  * Found test storage at /home/vagrant/spdk_repo/spdk/test/interrupt
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@402 -- # return 0
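The trace above (autotest_common.sh@356–402) probes each mount with `df`, then walks a list of storage candidates and keeps the first one whose filesystem has at least `requested_size` bytes free. A minimal sketch of that selection logic — the `pick_storage` name and the sizes passed in are illustrative, not the script's own:

```shell
#!/usr/bin/env bash
# Pick the first directory whose filesystem has at least $1 bytes
# available, mirroring the storage_candidates loop in the trace above.
pick_storage() {
    local need=$1; shift
    local dir avail
    for dir in "$@"; do
        mkdir -p "$dir" 2>/dev/null || continue
        # GNU df: available bytes for this path; tail skips the header row
        avail=$(df -B1 --output=avail "$dir" 2>/dev/null | tail -n 1 | tr -d ' ')
        if [ -n "$avail" ] && (( avail >= need )); then
            printf '%s\n' "$dir"
            return 0
        fi
    done
    return 1
}

# First candidate with at least 1 MiB free wins (the real run asks
# for 2214592512 bytes and falls through to /home/.../test/interrupt).
pick_storage $((1024 * 1024)) /tmp "$HOME"
```

In the log, `/` (ext4, 8443301888 bytes available) satisfies the 2214592512-byte request on the first candidate, which is why the search stops at `test/interrupt` itself.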
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1680 -- # set -o errtrace
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1685 -- # true
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1687 -- # xtrace_fd
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@27 -- # exec
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@29 -- # exec
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@31 -- # xtrace_restore
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@18 -- # set -x
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:42.120      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lcov --version
00:12:42.120      05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@336 -- # IFS=.-:
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@336 -- # read -ra ver1
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@337 -- # IFS=.-:
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@337 -- # read -ra ver2
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@338 -- # local 'op=<'
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@340 -- # ver1_l=2
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@341 -- # ver2_l=1
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@344 -- # case "$op" in
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@345 -- # : 1
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@365 -- # decimal 1
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=1
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 1
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@365 -- # ver1[v]=1
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@366 -- # decimal 2
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@353 -- # local d=2
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:42.120      05:02:55 reap_unregistered_poller -- scripts/common.sh@355 -- # echo 2
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@366 -- # ver2[v]=2
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:42.120     05:02:55 reap_unregistered_poller -- scripts/common.sh@368 -- # return 0
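The `cmp_versions 1.15 '<' 2` trace above splits both version strings on `.`, `-` and `:` (the `IFS=.-:` lines) and compares them component-by-component, padding the shorter one with zeros. A standalone sketch of that less-than comparison — `version_lt` is an illustrative name, and this sketch handles numeric components only:

```shell
#!/usr/bin/env bash
# Component-wise numeric version compare, in the spirit of
# scripts/common.sh's cmp_versions as seen in the trace.
version_lt() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=${#v1[@]}
    (( ${#v2[@]} > n )) && n=${#v2[@]}
    for (( i = 0; i < n; i++ )); do
        # Missing components count as 0, matching the zero-padding idea
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 is older than 2"   # → 1.15 is older than 2
```

With lcov 1.x this returns true, which is what gates the `--rc lcov_branch_coverage=1 ...` option set exported just below.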
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:42.120  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.120  		--rc genhtml_branch_coverage=1
00:12:42.120  		--rc genhtml_function_coverage=1
00:12:42.120  		--rc genhtml_legend=1
00:12:42.120  		--rc geninfo_all_blocks=1
00:12:42.120  		--rc geninfo_unexecuted_blocks=1
00:12:42.120  		
00:12:42.120  		'
00:12:42.120     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:42.120  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.120  		--rc genhtml_branch_coverage=1
00:12:42.120  		--rc genhtml_function_coverage=1
00:12:42.120  		--rc genhtml_legend=1
00:12:42.121  		--rc geninfo_all_blocks=1
00:12:42.121  		--rc geninfo_unexecuted_blocks=1
00:12:42.121  		
00:12:42.121  		'
00:12:42.121     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:42.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.121  		--rc genhtml_branch_coverage=1
00:12:42.121  		--rc genhtml_function_coverage=1
00:12:42.121  		--rc genhtml_legend=1
00:12:42.121  		--rc geninfo_all_blocks=1
00:12:42.121  		--rc geninfo_unexecuted_blocks=1
00:12:42.121  		
00:12:42.121  		'
00:12:42.121     05:02:55 reap_unregistered_poller -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:42.121  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:42.121  		--rc genhtml_branch_coverage=1
00:12:42.121  		--rc genhtml_function_coverage=1
00:12:42.121  		--rc genhtml_legend=1
00:12:42.121  		--rc geninfo_all_blocks=1
00:12:42.121  		--rc geninfo_unexecuted_blocks=1
00:12:42.121  		
00:12:42.121  		'
00:12:42.121    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/interrupt/common.sh
00:12:42.121    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:12:42.121    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@12 -- # r0_mask=0x1
00:12:42.121    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@13 -- # r1_mask=0x2
00:12:42.121    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@14 -- # r2_mask=0x4
00:12:42.121    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@16 -- # cpu_server_mask=0x07
00:12:42.121    05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@17 -- # rpc_server_addr=/var/tmp/spdk.sock
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/examples/interrupt_tgt
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@20 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@21 -- # local cpu_mask=0x07
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@24 -- # intr_tgt_pid=130205
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@25 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT
00:12:42.121   05:02:55 reap_unregistered_poller -- interrupt/interrupt_common.sh@26 -- # waitforlisten 130205 /var/tmp/spdk.sock
00:12:42.121   05:02:55 reap_unregistered_poller -- common/autotest_common.sh@835 -- # '[' -z 130205 ']'
00:12:42.121   05:02:55 reap_unregistered_poller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:42.121   05:02:55 reap_unregistered_poller -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:42.121  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:42.121   05:02:55 reap_unregistered_poller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:42.121   05:02:55 reap_unregistered_poller -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:42.121   05:02:55 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:42.121  [2024-11-20 05:02:55.996532] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:42.121  [2024-11-20 05:02:55.996888] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130205 ]
00:12:42.379  [2024-11-20 05:02:56.162745] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:42.379  [2024-11-20 05:02:56.181521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:12:42.379  [2024-11-20 05:02:56.215108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:12:42.379  [2024-11-20 05:02:56.215256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:42.379  [2024-11-20 05:02:56.215258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:12:42.379  [2024-11-20 05:02:56.293817] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:12:42.380   05:02:56 reap_unregistered_poller -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:42.380   05:02:56 reap_unregistered_poller -- common/autotest_common.sh@868 -- # return 0
00:12:42.380    05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers
00:12:42.380    05:02:56 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:42.380    05:02:56 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:42.380    05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]'
00:12:42.380    05:02:56 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{
00:12:42.638    "name": "app_thread",
00:12:42.638    "id": 1,
00:12:42.638    "active_pollers": [],
00:12:42.638    "timed_pollers": [
00:12:42.638      {
00:12:42.638        "name": "rpc_subsystem_poll_servers",
00:12:42.638        "id": 1,
00:12:42.638        "state": "waiting",
00:12:42.638        "run_count": 0,
00:12:42.638        "busy_count": 0,
00:12:42.638        "period_ticks": 8800000
00:12:42.638      }
00:12:42.638    ],
00:12:42.638    "paused_pollers": []
00:12:42.638  }'
00:12:42.638    05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r '.active_pollers[].name'
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers=
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' '
00:12:42.638    05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name'
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll_servers
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio
00:12:42.638    05:02:56 reap_unregistered_poller -- interrupt/common.sh@77 -- # uname -s
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/interrupt/aiofile bs=2048 count=5000
00:12:42.638  5000+0 records in
00:12:42.638  5000+0 records out
00:12:42.638  10240000 bytes (10 MB, 9.8 MiB) copied, 0.024149 s, 424 MB/s
00:12:42.638   05:02:56 reap_unregistered_poller -- interrupt/common.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile AIO0 2048
00:12:42.897  AIO0
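The `setup_bdev_aio` step above backs an AIO bdev with a plain file in two moves: `dd` a 10 MB zero-filled file, then register it as bdev `AIO0` over the RPC socket. The file-creation half can be reproduced standalone (the RPC call is shown as a comment since it needs a running SPDK target; the temp-file path is substituted for the log's `test/interrupt/aiofile`):

```shell
#!/usr/bin/env bash
# 5000 blocks of 2048 bytes gives the 10,240,000-byte file reported
# in the dd output above ("10240000 bytes (10 MB, 9.8 MiB) copied").
aiofile=$(mktemp)
dd if=/dev/zero of="$aiofile" bs=2048 count=5000 status=none

stat -c %s "$aiofile"   # → 10240000

# Registering it with a 2048-byte block size, as interrupt/common.sh@79
# does against the running interrupt_tgt, would then be:
#   scripts/rpc.py bdev_aio_create "$aiofile" AIO0 2048
rm -f "$aiofile"
```

The point of the AIO bdev in this test is that its examine path registers pollers, giving `reap_unregistered_poller` something to reap before the poller list is re-queried.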
00:12:42.897   05:02:56 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
00:12:43.156   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1
00:12:43.415    05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers
00:12:43.415    05:02:57 reap_unregistered_poller -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.415    05:02:57 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:43.415    05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]'
00:12:43.415    05:02:57 reap_unregistered_poller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.415   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{
00:12:43.415    "name": "app_thread",
00:12:43.415    "id": 1,
00:12:43.415    "active_pollers": [],
00:12:43.415    "timed_pollers": [
00:12:43.415      {
00:12:43.415        "name": "rpc_subsystem_poll_servers",
00:12:43.415        "id": 1,
00:12:43.415        "state": "waiting",
00:12:43.415        "run_count": 0,
00:12:43.415        "busy_count": 0,
00:12:43.415        "period_ticks": 8800000
00:12:43.415      }
00:12:43.415    ],
00:12:43.415    "paused_pollers": []
00:12:43.415  }'
00:12:43.415    05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name'
00:12:43.415   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers=
00:12:43.415   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' '
00:12:43.415    05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name'
00:12:43.415   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll_servers
00:12:43.415   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@44 -- # [[  rpc_subsystem_poll_servers == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l\_\s\e\r\v\e\r\s ]]
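The check at reap_unregistered_poller.sh@44 passes because the concatenated poller names before the AIO bdev was created (`native_pollers`) and after it was torn down (`remaining_pollers`) are both `' rpc_subsystem_poll_servers'` — the leading space comes from the empty `active_pollers` list. The jq extraction can be replayed against a JSON blob shaped like the `thread_get_pollers` output captured above (the blob here is a trimmed stand-in, not the live RPC response):

```shell
#!/usr/bin/env bash
# Rebuild the "<active pollers> <timed pollers>" comparison string the
# test constructs at reap_unregistered_poller.sh@21-23 and @38-40.
app_thread='{
  "name": "app_thread",
  "id": 1,
  "active_pollers": [],
  "timed_pollers": [
    { "name": "rpc_subsystem_poll_servers", "id": 1, "state": "waiting" }
  ],
  "paused_pollers": []
}'

pollers=$(jq -r '.active_pollers[].name' <<< "$app_thread")   # empty
pollers+=' '
pollers+=$(jq -r '.timed_pollers[].name' <<< "$app_thread")

# Leading space preserved, matching the '[[  rpc_subsystem_poll_servers == ...'
# pattern visible in the xtrace line above.
printf '[%s]\n' "$pollers"   # → [ rpc_subsystem_poll_servers]
```

Any poller left behind by the reaped AIO bdev would show up as an extra name here and fail the `[[ ... == ... ]]` comparison.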
00:12:43.415   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:12:43.415   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 130205
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@954 -- # '[' -z 130205 ']'
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@958 -- # kill -0 130205
00:12:43.415    05:02:57 reap_unregistered_poller -- common/autotest_common.sh@959 -- # uname
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:12:43.415    05:02:57 reap_unregistered_poller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 130205
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:12:43.415  killing process with pid 130205
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 130205'
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@973 -- # kill 130205
00:12:43.415   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@978 -- # wait 130205
00:12:43.674   05:02:57 reap_unregistered_poller -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup
00:12:43.674   05:02:57 reap_unregistered_poller -- interrupt/common.sh@6 -- # rm -f /home/vagrant/spdk_repo/spdk/test/interrupt/aiofile
00:12:43.674  
00:12:43.674  real	0m2.066s
00:12:43.674  user	0m1.707s
00:12:43.674  sys	0m0.459s
00:12:43.674   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:43.674   05:02:57 reap_unregistered_poller -- common/autotest_common.sh@10 -- # set +x
00:12:43.674  ************************************
00:12:43.674  END TEST reap_unregistered_poller
00:12:43.674  ************************************
00:12:43.933   05:02:57  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:12:43.933    05:02:57  -- spdk/autotest.sh@194 -- # uname -s
00:12:43.933   05:02:57  -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]]
00:12:43.933   05:02:57  -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]]
00:12:43.933   05:02:57  -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]]
00:12:43.933   05:02:57  -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:12:43.933   05:02:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:12:43.933   05:02:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:43.933   05:02:57  -- common/autotest_common.sh@10 -- # set +x
00:12:43.933  ************************************
00:12:43.933  START TEST spdk_dd
00:12:43.933  ************************************
00:12:43.933   05:02:57 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh
00:12:43.933  * Looking for test storage...
00:12:43.933  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:12:43.933     05:02:57 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:43.933      05:02:57 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version
00:12:43.933      05:02:57 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:43.933     05:02:57 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@336 -- # IFS=.-:
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@337 -- # IFS=.-:
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@338 -- # local 'op=<'
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@344 -- # case "$op" in
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@345 -- # : 1
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@365 -- # decimal 1
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@353 -- # local d=1
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@355 -- # echo 1
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@366 -- # decimal 2
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@353 -- # local d=2
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:43.933      05:02:57 spdk_dd -- scripts/common.sh@355 -- # echo 2
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:43.933     05:02:57 spdk_dd -- scripts/common.sh@368 -- # return 0
00:12:43.933     05:02:57 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:43.934     05:02:57 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:43.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:43.934  		--rc genhtml_branch_coverage=1
00:12:43.934  		--rc genhtml_function_coverage=1
00:12:43.934  		--rc genhtml_legend=1
00:12:43.934  		--rc geninfo_all_blocks=1
00:12:43.934  		--rc geninfo_unexecuted_blocks=1
00:12:43.934  		
00:12:43.934  		'
00:12:43.934     05:02:57 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:43.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:43.934  		--rc genhtml_branch_coverage=1
00:12:43.934  		--rc genhtml_function_coverage=1
00:12:43.934  		--rc genhtml_legend=1
00:12:43.934  		--rc geninfo_all_blocks=1
00:12:43.934  		--rc geninfo_unexecuted_blocks=1
00:12:43.934  		
00:12:43.934  		'
00:12:43.934     05:02:57 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:43.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:43.934  		--rc genhtml_branch_coverage=1
00:12:43.934  		--rc genhtml_function_coverage=1
00:12:43.934  		--rc genhtml_legend=1
00:12:43.934  		--rc geninfo_all_blocks=1
00:12:43.934  		--rc geninfo_unexecuted_blocks=1
00:12:43.934  		
00:12:43.934  		'
00:12:43.934     05:02:57 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:43.934  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:43.934  		--rc genhtml_branch_coverage=1
00:12:43.934  		--rc genhtml_function_coverage=1
00:12:43.934  		--rc genhtml_legend=1
00:12:43.934  		--rc geninfo_all_blocks=1
00:12:43.934  		--rc geninfo_unexecuted_blocks=1
00:12:43.934  		
00:12:43.934  		'
00:12:43.934    05:02:57 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:43.934     05:02:57 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob
00:12:43.934     05:02:57 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:43.934     05:02:57 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:43.934     05:02:57 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:43.934      05:02:57 spdk_dd -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:43.934      05:02:57 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:43.934      05:02:57 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:43.934      05:02:57 spdk_dd -- paths/export.sh@5 -- # export PATH
00:12:43.934      05:02:57 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:43.934   05:02:57 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:12:44.193  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:12:44.452  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:12:45.388   05:02:59 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace))
00:12:45.388    05:02:59 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@313 -- # local nvmes
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]]
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@298 -- # local bdf=
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@233 -- # local class
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@234 -- # local subclass
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@235 -- # local progif
00:12:45.388       05:02:59 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@236 -- # class=01
00:12:45.388       05:02:59 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@237 -- # subclass=08
00:12:45.388       05:02:59 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@238 -- # progif=02
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@240 -- # hash lspci
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:12:45.388      05:02:59 spdk_dd -- scripts/common.sh@245 -- # tr -d '"'
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@18 -- # local i
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]]
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@27 -- # return 0
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:12:45.388     05:02:59 spdk_dd -- scripts/common.sh@323 -- # uname -s
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@328 -- # (( 1 ))
00:12:45.388    05:02:59 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0
00:12:45.388   05:02:59 spdk_dd -- dd/dd.sh@13 -- # check_liburing
00:12:45.388   05:02:59 spdk_dd -- dd/common.sh@139 -- # local lib
00:12:45.388   05:02:59 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0
00:12:45.388   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.388    05:02:59 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:45.388    05:02:59 spdk_dd -- dd/common.sh@137 -- # grep NEEDED
00:12:45.647   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.6 == liburing.so.* ]]
00:12:45.647   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.647   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libnuma.so.1 == liburing.so.* ]]
00:12:45.647   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.647   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libuuid.so.1 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libssl.so.3 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libcrypto.so.3 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libm.so.6 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libkeyutils.so.1 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libiscsi.so.7 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]]
00:12:45.648   05:02:59 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _
00:12:45.648   05:02:59 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 ))
00:12:45.648   05:02:59 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0
00:12:45.648   05:02:59 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:45.648   05:02:59 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:45.648   05:02:59 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:12:45.648  ************************************
00:12:45.648  START TEST spdk_dd_basic_rw
00:12:45.648  ************************************
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0
00:12:45.648  * Looking for test storage...
00:12:45.648  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-:
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-:
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<'
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 ))
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:12:45.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:45.648  		--rc genhtml_branch_coverage=1
00:12:45.648  		--rc genhtml_function_coverage=1
00:12:45.648  		--rc genhtml_legend=1
00:12:45.648  		--rc geninfo_all_blocks=1
00:12:45.648  		--rc geninfo_unexecuted_blocks=1
00:12:45.648  		
00:12:45.648  		'
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:12:45.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:45.648  		--rc genhtml_branch_coverage=1
00:12:45.648  		--rc genhtml_function_coverage=1
00:12:45.648  		--rc genhtml_legend=1
00:12:45.648  		--rc geninfo_all_blocks=1
00:12:45.648  		--rc geninfo_unexecuted_blocks=1
00:12:45.648  		
00:12:45.648  		'
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:12:45.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:45.648  		--rc genhtml_branch_coverage=1
00:12:45.648  		--rc genhtml_function_coverage=1
00:12:45.648  		--rc genhtml_legend=1
00:12:45.648  		--rc geninfo_all_blocks=1
00:12:45.648  		--rc geninfo_unexecuted_blocks=1
00:12:45.648  		
00:12:45.648  		'
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:12:45.648  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:12:45.648  		--rc genhtml_branch_coverage=1
00:12:45.648  		--rc genhtml_function_coverage=1
00:12:45.648  		--rc genhtml_legend=1
00:12:45.648  		--rc geninfo_all_blocks=1
00:12:45.648  		--rc geninfo_unexecuted_blocks=1
00:12:45.648  		
00:12:45.648  		'
00:12:45.648    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH
00:12:45.648      05:02:59 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@")
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie')
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:12:45.648   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:45.648    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0
00:12:45.648    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id
00:12:45.648    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id
00:12:45.648     05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0'
00:12:46.219    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID:                             1b36 Subsystem Vendor ID:                   1af4 Serial Number:                         12340 Model Number:                          QEMU NVMe Ctrl Firmware Version:                      8.0.0 Recommended Arb Burst:                 6 IEEE OUI Identifier:                   00 54 52 Multi-path I/O   May have multiple subsystem ports:   No   May have multiple controllers:       No   Associated with SR-IOV VF:           No Max Data Transfer Size:                524288 Max Number of Namespaces:              256 Max Number of I/O Queues:              64 NVMe Specification Version (VS):       1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries:                 2048 Contiguous Queues Required:            Yes Arbitration Mechanisms Supported   Weighted Round Robin:                Not Supported   Vendor Specific:                     Not Supported Reset Timeout:                         7500 ms Doorbell Stride:                       4 bytes NVM Subsystem Reset:                   Not Supported Command Sets Supported   NVM Command Set:                     Supported Boot Partition:                        Not Supported Memory Page Size Minimum:              4096 bytes Memory Page Size Maximum:              65536 bytes Persistent Memory Region:              Not Supported Optional Asynchronous Events Supported   Namespace Attribute Notices:         Supported   Firmware Activation Notices:         Not Supported   ANA Change Notices:                  Not Supported   PLE Aggregate Log Change Notices:    Not Supported   LBA Status Info Alert Notices:       Not Supported   EGE Aggregate Log Change Notices:    Not Supported   Normal NVM 
Subsystem Shutdown event: Not Supported   Zone Descriptor Change Notices:      Not Supported   Discovery Log Change Notices:        Not Supported Controller Attributes   128-bit Host Identifier:             Not Supported   Non-Operational Permissive Mode:     Not Supported   NVM Sets:                            Not Supported   Read Recovery Levels:                Not Supported   Endurance Groups:                    Not Supported   Predictable Latency Mode:            Not Supported   Traffic Based Keep ALive:            Not Supported   Namespace Granularity:               Not Supported   SQ Associations:                     Not Supported   UUID List:                           Not Supported   Multi-Domain Subsystem:              Not Supported   Fixed Capacity Management:           Not Supported   Variable Capacity Management:        Not Supported   Delete Endurance Group:              Not Supported   Delete NVM Set:                      Not Supported   Extended LBA Formats Supported:      Supported   Flexible Data Placement Supported:   Not Supported  Controller Memory Buffer Support ================================ Supported:                             No  Persistent Memory Region Support ================================ Supported:                             No  Admin Command Set Attributes ============================ Security Send/Receive:                 Not Supported Format NVM:                            Supported Firmware Activate/Download:            Not Supported Namespace Management:                  Supported Device Self-Test:                      Not Supported Directives:                            Supported NVMe-MI:                               Not Supported Virtualization Management:             Not Supported Doorbell Buffer Config:                Supported Get LBA Status Capability:             Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit:                   4 Async Event Request Limit:             4 Number 
of Firmware Slots:              N/A Firmware Slot 1 Read-Only:             N/A Firmware Activation Without Reset:     N/A Multiple Update Detection Support:     N/A Firmware Update Granularity:           No Information Provided Per-Namespace SMART Log:               Yes Asymmetric Namespace Access Log Page:  Not Supported Subsystem NQN:                         nqn.2019-08.org.qemu:12340 Command Effects Log Page:              Supported Get Log Page Extended Data:            Supported Telemetry Log Pages:                   Not Supported Persistent Event Log Pages:            Not Supported Supported Log Pages Log Page:          May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page:   May Support Data Area 4 for Telemetry Log:         Not Supported Error Log Page Entries Supported:      1 Keep Alive:                            Not Supported  NVM Command Set Attributes ========================== Submission Queue Entry Size   Max:                       64   Min:                       64 Completion Queue Entry Size   Max:                       16   Min:                       16 Number of Namespaces:        256 Compare Command:             Supported Write Uncorrectable Command: Not Supported Dataset Management Command:  Supported Write Zeroes Command:        Supported Set Features Save Field:     Supported Reservations:                Not Supported Timestamp:                   Supported Copy:                        Supported Volatile Write Cache:        Present Atomic Write Unit (Normal):  1 Atomic Write Unit (PFail):   1 Atomic Compare & Write Unit: 1 Fused Compare & Write:       Not Supported Scatter-Gather List   SGL Command Set:           Supported   SGL Keyed:                 Not Supported   SGL Bit Bucket Descriptor: Not Supported   SGL Metadata Pointer:      Not Supported   Oversized SGL:             Not Supported   SGL Metadata Address:      Not Supported   SGL Offset: 
               Not Supported   Transport SGL Data Block:  Not Supported Replay Protected Memory Block:  Not Supported  Firmware Slot Information ========================= Active slot:                 1 Slot 1 Firmware Revision:    1.0   Commands Supported and Effects ============================== Admin Commands --------------    Delete I/O Submission Queue (00h): Supported     Create I/O Submission Queue (01h): Supported                    Get Log Page (02h): Supported     Delete I/O Completion Queue (04h): Supported     Create I/O Completion Queue (05h): Supported                        Identify (06h): Supported                           Abort (08h): Supported                    Set Features (09h): Supported                    Get Features (0Ah): Supported      Asynchronous Event Request (0Ch): Supported            Namespace Attachment (15h): Supported NS-Inventory-Change                  Directive Send (19h): Supported               Directive Receive (1Ah): Supported       Virtualization Management (1Ch): Supported          Doorbell Buffer Config (7Ch): Supported                      Format NVM (80h): Supported LBA-Change  I/O Commands ------------                          Flush (00h): Supported LBA-Change                           Write (01h): Supported LBA-Change                            Read (02h): Supported                         Compare (05h): Supported                    Write Zeroes (08h): Supported LBA-Change              Dataset Management (09h): Supported LBA-Change                         Unknown (0Ch): Supported                         Unknown (12h): Supported                            Copy (19h): Supported LBA-Change                         Unknown (1Dh): Supported LBA-Change   Error Log =========  Arbitration =========== Arbitration Burst:           no limit  Power Management ================ Number of Power States:          1 Current Power State:             Power State #0 Power State #0:   Max Power:                     25.00 W   
Non-Operational State:         Operational   Entry Latency:                 16 microseconds   Exit Latency:                  4 microseconds   Relative Read Throughput:      0   Relative Read Latency:         0   Relative Write Throughput:     0   Relative Write Latency:        0   Idle Power:                     Not Reported   Active Power:                   Not Reported Non-Operational Permissive Mode: Not Supported  Health Information ================== Critical Warnings:   Available Spare Space:     OK   Temperature:               OK   Device Reliability:        OK   Read Only:                 No   Volatile Memory Backup:    OK Current Temperature:         323 Kelvin (50 Celsius) Temperature Threshold:       343 Kelvin (70 Celsius) Available Spare:             0% Available Spare Threshold:   0% Life Percentage Used:        0% Data Units Read:             25 Data Units Written:          3 Host Read Commands:          626 Host Write Commands:         20 Controller Busy Time:        0 minutes Power Cycles:                0 Power On Hours:              0 hours Unsafe Shutdowns:            0 Unrecoverable Media Errors:  0 Lifetime Error Log Entries:  0 Warning Temperature Time:    0 minutes Critical Temperature Time:   0 minutes  Number of Queues ================ Number of I/O Submission Queues:      64 Number of I/O Completion Queues:      64  ZNS Specific Controller Data ============================ Zone Append Size Limit:      0   Active Namespaces ================= Namespace ID:1 Error Recovery Timeout:                Unlimited Command Set Identifier:                NVM (00h) Deallocate:                            Supported Deallocated/Unwritten Error:           Supported Deallocated Read Value:                All 0x00 Deallocate in Write Zeroes:            Not Supported Deallocated Guard Field:               0xFFFF Flush:                                 Supported Reservation:                           Not Supported Namespace Sharing Capabilities:        Private 
Size (in LBAs):                        1310720 (5GiB) Capacity (in LBAs):                    1310720 (5GiB) Utilization (in LBAs):                 1310720 (5GiB) Thin Provisioning:                     Not Supported Per-NS Atomic Units:                   No Maximum Single Source Range Length:    128 Maximum Copy Length:                   128 Maximum Source Range Count:            128 NGUID/EUI64 Never Reused:              No Namespace Write Protected:             No Number of LBA Formats:                 8 Current LBA Format:                    LBA Format #04 LBA Format #00: Data Size:   512  Metadata Size:     0 LBA Format #01: Data Size:   512  Metadata Size:     8 LBA Format #02: Data Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask:               0 Protection Information Capabilities:   16b Guard Protection Information Storage Tag Support:  No   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0   Storage Tag Check Read Support:                        No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]]
00:12:46.219    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID:                             1b36 Subsystem Vendor ID:                   1af4 Serial Number:                         12340 Model Number:                          QEMU NVMe Ctrl Firmware Version:                      8.0.0 Recommended Arb Burst:                 6 IEEE OUI Identifier:                   00 54 52 Multi-path I/O   May have multiple subsystem ports:   No   May have multiple controllers:       No   Associated with SR-IOV VF:           No Max Data Transfer Size:                524288 Max Number of Namespaces:              256 Max Number of I/O Queues:              64 NVMe Specification Version (VS):       1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries:                 2048 Contiguous Queues Required:            Yes Arbitration Mechanisms Supported   Weighted Round Robin:                Not Supported   Vendor Specific:                     Not Supported Reset Timeout:                         7500 ms Doorbell Stride:                       4 bytes NVM Subsystem Reset:                   Not Supported Command Sets Supported   NVM Command Set:                     Supported Boot Partition:                        Not Supported Memory Page Size Minimum:              4096 bytes Memory Page Size Maximum:              65536 bytes Persistent Memory Region:              Not Supported Optional Asynchronous Events Supported   Namespace Attribute Notices:         Supported   Firmware Activation Notices:         Not Supported   ANA Change Notices:                  Not Supported   PLE Aggregate Log Change Notices:    Not Supported   LBA Status Info Alert Notices:       Not Supported   EGE Aggregate Log Change Notices:    Not Supported   Normal NVM 
Subsystem Shutdown event: Not Supported   Zone Descriptor Change Notices:      Not Supported   Discovery Log Change Notices:        Not Supported Controller Attributes   128-bit Host Identifier:             Not Supported   Non-Operational Permissive Mode:     Not Supported   NVM Sets:                            Not Supported   Read Recovery Levels:                Not Supported   Endurance Groups:                    Not Supported   Predictable Latency Mode:            Not Supported   Traffic Based Keep ALive:            Not Supported   Namespace Granularity:               Not Supported   SQ Associations:                     Not Supported   UUID List:                           Not Supported   Multi-Domain Subsystem:              Not Supported   Fixed Capacity Management:           Not Supported   Variable Capacity Management:        Not Supported   Delete Endurance Group:              Not Supported   Delete NVM Set:                      Not Supported   Extended LBA Formats Supported:      Supported   Flexible Data Placement Supported:   Not Supported  Controller Memory Buffer Support ================================ Supported:                             No  Persistent Memory Region Support ================================ Supported:                             No  Admin Command Set Attributes ============================ Security Send/Receive:                 Not Supported Format NVM:                            Supported Firmware Activate/Download:            Not Supported Namespace Management:                  Supported Device Self-Test:                      Not Supported Directives:                            Supported NVMe-MI:                               Not Supported Virtualization Management:             Not Supported Doorbell Buffer Config:                Supported Get LBA Status Capability:             Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit:                   4 Async Event Request Limit:             4 Number 
of Firmware Slots:              N/A Firmware Slot 1 Read-Only:             N/A Firmware Activation Without Reset:     N/A Multiple Update Detection Support:     N/A Firmware Update Granularity:           No Information Provided Per-Namespace SMART Log:               Yes Asymmetric Namespace Access Log Page:  Not Supported Subsystem NQN:                         nqn.2019-08.org.qemu:12340 Command Effects Log Page:              Supported Get Log Page Extended Data:            Supported Telemetry Log Pages:                   Not Supported Persistent Event Log Pages:            Not Supported Supported Log Pages Log Page:          May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page:   May Support Data Area 4 for Telemetry Log:         Not Supported Error Log Page Entries Supported:      1 Keep Alive:                            Not Supported  NVM Command Set Attributes ========================== Submission Queue Entry Size   Max:                       64   Min:                       64 Completion Queue Entry Size   Max:                       16   Min:                       16 Number of Namespaces:        256 Compare Command:             Supported Write Uncorrectable Command: Not Supported Dataset Management Command:  Supported Write Zeroes Command:        Supported Set Features Save Field:     Supported Reservations:                Not Supported Timestamp:                   Supported Copy:                        Supported Volatile Write Cache:        Present Atomic Write Unit (Normal):  1 Atomic Write Unit (PFail):   1 Atomic Compare & Write Unit: 1 Fused Compare & Write:       Not Supported Scatter-Gather List   SGL Command Set:           Supported   SGL Keyed:                 Not Supported   SGL Bit Bucket Descriptor: Not Supported   SGL Metadata Pointer:      Not Supported   Oversized SGL:             Not Supported   SGL Metadata Address:      Not Supported   SGL Offset: 
               Not Supported   Transport SGL Data Block:  Not Supported Replay Protected Memory Block:  Not Supported  Firmware Slot Information ========================= Active slot:                 1 Slot 1 Firmware Revision:    1.0   Commands Supported and Effects ============================== Admin Commands --------------    Delete I/O Submission Queue (00h): Supported     Create I/O Submission Queue (01h): Supported                    Get Log Page (02h): Supported     Delete I/O Completion Queue (04h): Supported     Create I/O Completion Queue (05h): Supported                        Identify (06h): Supported                           Abort (08h): Supported                    Set Features (09h): Supported                    Get Features (0Ah): Supported      Asynchronous Event Request (0Ch): Supported            Namespace Attachment (15h): Supported NS-Inventory-Change                  Directive Send (19h): Supported               Directive Receive (1Ah): Supported       Virtualization Management (1Ch): Supported          Doorbell Buffer Config (7Ch): Supported                      Format NVM (80h): Supported LBA-Change  I/O Commands ------------                          Flush (00h): Supported LBA-Change                           Write (01h): Supported LBA-Change                            Read (02h): Supported                         Compare (05h): Supported                    Write Zeroes (08h): Supported LBA-Change              Dataset Management (09h): Supported LBA-Change                         Unknown (0Ch): Supported                         Unknown (12h): Supported                            Copy (19h): Supported LBA-Change                         Unknown (1Dh): Supported LBA-Change   Error Log =========  Arbitration =========== Arbitration Burst:           no limit  Power Management ================ Number of Power States:          1 Current Power State:             Power State #0 Power State #0:   Max Power:                     25.00 W   
Non-Operational State:         Operational   Entry Latency:                 16 microseconds   Exit Latency:                  4 microseconds   Relative Read Throughput:      0   Relative Read Latency:         0   Relative Write Throughput:     0   Relative Write Latency:        0   Idle Power:                     Not Reported   Active Power:                   Not Reported Non-Operational Permissive Mode: Not Supported  Health Information ================== Critical Warnings:   Available Spare Space:     OK   Temperature:               OK   Device Reliability:        OK   Read Only:                 No   Volatile Memory Backup:    OK Current Temperature:         323 Kelvin (50 Celsius) Temperature Threshold:       343 Kelvin (70 Celsius) Available Spare:             0% Available Spare Threshold:   0% Life Percentage Used:        0% Data Units Read:             25 Data Units Written:          3 Host Read Commands:          626 Host Write Commands:         20 Controller Busy Time:        0 minutes Power Cycles:                0 Power On Hours:              0 hours Unsafe Shutdowns:            0 Unrecoverable Media Errors:  0 Lifetime Error Log Entries:  0 Warning Temperature Time:    0 minutes Critical Temperature Time:   0 minutes  Number of Queues ================ Number of I/O Submission Queues:      64 Number of I/O Completion Queues:      64  ZNS Specific Controller Data ============================ Zone Append Size Limit:      0   Active Namespaces ================= Namespace ID:1 Error Recovery Timeout:                Unlimited Command Set Identifier:                NVM (00h) Deallocate:                            Supported Deallocated/Unwritten Error:           Supported Deallocated Read Value:                All 0x00 Deallocate in Write Zeroes:            Not Supported Deallocated Guard Field:               0xFFFF Flush:                                 Supported Reservation:                           Not Supported Namespace Sharing Capabilities:        Private 
Size (in LBAs):                        1310720 (5GiB) Capacity (in LBAs):                    1310720 (5GiB) Utilization (in LBAs):                 1310720 (5GiB) Thin Provisioning:                     Not Supported Per-NS Atomic Units:                   No Maximum Single Source Range Length:    128 Maximum Copy Length:                   128 Maximum Source Range Count:            128 NGUID/EUI64 Never Reused:              No Namespace Write Protected:             No Number of LBA Formats:                 8 Current LBA Format:                    LBA Format #04 LBA Format #00: Data Size:   512  Metadata Size:     0 LBA Format #01: Data Size:   512  Metadata Size:     8 LBA Format #02: Data Size:   512  Metadata Size:    16 LBA Format #03: Data Size:   512  Metadata Size:    64 LBA Format #04: Data Size:  4096  Metadata Size:     0 LBA Format #05: Data Size:  4096  Metadata Size:     8 LBA Format #06: Data Size:  4096  Metadata Size:    16 LBA Format #07: Data Size:  4096  Metadata Size:    64  NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask:               0 Protection Information Capabilities:   16b Guard Protection Information Storage Tag Support:  No   16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0   Storage Tag Check Read Support:                        No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]]
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:46.220  ************************************
00:12:46.220  START TEST dd_bs_lt_native_bs
00:12:46.220  ************************************
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:46.220    05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:12:46.220   05:02:59 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:12:46.220  {
00:12:46.220    "subsystems": [
00:12:46.220      {
00:12:46.220        "subsystem": "bdev",
00:12:46.220        "config": [
00:12:46.220          {
00:12:46.220            "params": {
00:12:46.220              "trtype": "pcie",
00:12:46.220              "traddr": "0000:00:10.0",
00:12:46.220              "name": "Nvme0"
00:12:46.220            },
00:12:46.220            "method": "bdev_nvme_attach_controller"
00:12:46.220          },
00:12:46.220          {
00:12:46.220            "method": "bdev_wait_for_examine"
00:12:46.220          }
00:12:46.220        ]
00:12:46.220      }
00:12:46.220    ]
00:12:46.220  }
00:12:46.220  [2024-11-20 05:02:59.984616] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:46.220  [2024-11-20 05:02:59.984899] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130528 ]
00:12:46.220  [2024-11-20 05:03:00.137241] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:46.479  [2024-11-20 05:03:00.172331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:46.479  [2024-11-20 05:03:00.212759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:46.479  [2024-11-20 05:03:00.390298] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size
00:12:46.479  [2024-11-20 05:03:00.390584] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:12:46.738  [2024-11-20 05:03:00.514916] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:46.738  
00:12:46.738  real	0m0.717s
00:12:46.738  user	0m0.417s
00:12:46.738  sys	0m0.265s
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x
00:12:46.738  ************************************
00:12:46.738  END TEST dd_bs_lt_native_bs
00:12:46.738  ************************************
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:12:46.738  ************************************
00:12:46.738  START TEST dd_rw
00:12:46.738  ************************************
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64)
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2}
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs)))
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:46.738   05:03:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:47.673   05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62
00:12:47.673    05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:47.673    05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:47.673    05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:47.673  {
00:12:47.673    "subsystems": [
00:12:47.673      {
00:12:47.673        "subsystem": "bdev",
00:12:47.673        "config": [
00:12:47.673          {
00:12:47.673            "params": {
00:12:47.673              "trtype": "pcie",
00:12:47.673              "traddr": "0000:00:10.0",
00:12:47.673              "name": "Nvme0"
00:12:47.673            },
00:12:47.673            "method": "bdev_nvme_attach_controller"
00:12:47.673          },
00:12:47.673          {
00:12:47.673            "method": "bdev_wait_for_examine"
00:12:47.673          }
00:12:47.673        ]
00:12:47.673      }
00:12:47.673    ]
00:12:47.673  }
00:12:47.673  [2024-11-20 05:03:01.341415] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:47.674  [2024-11-20 05:03:01.341705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130576 ]
00:12:47.674  [2024-11-20 05:03:01.490180] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:47.674  [2024-11-20 05:03:01.516192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:47.674  [2024-11-20 05:03:01.550400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:47.932  
[2024-11-20T05:03:02.148Z] Copying: 60/60 [kB] (average 19 MBps)
00:12:48.191  
00:12:48.191   05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62
00:12:48.191    05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:48.191    05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:48.191    05:03:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:48.191  {
00:12:48.191    "subsystems": [
00:12:48.191      {
00:12:48.191        "subsystem": "bdev",
00:12:48.191        "config": [
00:12:48.191          {
00:12:48.191            "params": {
00:12:48.191              "trtype": "pcie",
00:12:48.191              "traddr": "0000:00:10.0",
00:12:48.191              "name": "Nvme0"
00:12:48.191            },
00:12:48.191            "method": "bdev_nvme_attach_controller"
00:12:48.191          },
00:12:48.191          {
00:12:48.191            "method": "bdev_wait_for_examine"
00:12:48.191          }
00:12:48.191        ]
00:12:48.191      }
00:12:48.191    ]
00:12:48.191  }
00:12:48.191  [2024-11-20 05:03:02.032353] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:48.191  [2024-11-20 05:03:02.032637] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130589 ]
00:12:48.450  [2024-11-20 05:03:02.181786] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:48.450  [2024-11-20 05:03:02.206152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:48.450  [2024-11-20 05:03:02.241113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:48.450  
[2024-11-20T05:03:02.666Z] Copying: 60/60 [kB] (average 29 MBps)
00:12:48.709  
00:12:48.709   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:48.968   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:12:48.968   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:48.968   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:48.968   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440
00:12:48.968   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:48.968   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:48.968   05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:48.968    05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:48.968    05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:48.968    05:03:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:48.968  {
00:12:48.968    "subsystems": [
00:12:48.968      {
00:12:48.968        "subsystem": "bdev",
00:12:48.968        "config": [
00:12:48.968          {
00:12:48.968            "params": {
00:12:48.968              "trtype": "pcie",
00:12:48.968              "traddr": "0000:00:10.0",
00:12:48.968              "name": "Nvme0"
00:12:48.968            },
00:12:48.968            "method": "bdev_nvme_attach_controller"
00:12:48.968          },
00:12:48.968          {
00:12:48.968            "method": "bdev_wait_for_examine"
00:12:48.968          }
00:12:48.968        ]
00:12:48.968      }
00:12:48.968    ]
00:12:48.968  }
00:12:48.968  [2024-11-20 05:03:02.729841] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:48.968  [2024-11-20 05:03:02.730306] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130605 ]
00:12:48.968  [2024-11-20 05:03:02.883917] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:48.968  [2024-11-20 05:03:02.910753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:49.226  [2024-11-20 05:03:02.958948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:49.226  
[2024-11-20T05:03:03.442Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:12:49.485  
00:12:49.485   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:49.485   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15
00:12:49.485   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15
00:12:49.485   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440
00:12:49.485   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440
00:12:49.485   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:49.485   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:50.053   05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62
00:12:50.053    05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:50.053    05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:50.053    05:03:03 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:50.053  {
00:12:50.053    "subsystems": [
00:12:50.053      {
00:12:50.053        "subsystem": "bdev",
00:12:50.053        "config": [
00:12:50.053          {
00:12:50.053            "params": {
00:12:50.053              "trtype": "pcie",
00:12:50.053              "traddr": "0000:00:10.0",
00:12:50.053              "name": "Nvme0"
00:12:50.053            },
00:12:50.053            "method": "bdev_nvme_attach_controller"
00:12:50.053          },
00:12:50.053          {
00:12:50.053            "method": "bdev_wait_for_examine"
00:12:50.053          }
00:12:50.053        ]
00:12:50.053      }
00:12:50.053    ]
00:12:50.053  }
00:12:50.053  [2024-11-20 05:03:03.984915] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:50.053  [2024-11-20 05:03:03.985185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130625 ]
00:12:50.311  [2024-11-20 05:03:04.134795] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:50.311  [2024-11-20 05:03:04.160424] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:50.311  [2024-11-20 05:03:04.193168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:50.570  
[2024-11-20T05:03:04.786Z] Copying: 60/60 [kB] (average 58 MBps)
00:12:50.829  
00:12:50.829   05:03:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62
00:12:50.829    05:03:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:50.829    05:03:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:50.829    05:03:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:50.829  {
00:12:50.829    "subsystems": [
00:12:50.829      {
00:12:50.829        "subsystem": "bdev",
00:12:50.829        "config": [
00:12:50.829          {
00:12:50.829            "params": {
00:12:50.829              "trtype": "pcie",
00:12:50.829              "traddr": "0000:00:10.0",
00:12:50.829              "name": "Nvme0"
00:12:50.829            },
00:12:50.829            "method": "bdev_nvme_attach_controller"
00:12:50.829          },
00:12:50.829          {
00:12:50.829            "method": "bdev_wait_for_examine"
00:12:50.829          }
00:12:50.829        ]
00:12:50.829      }
00:12:50.829    ]
00:12:50.829  }
00:12:50.829  [2024-11-20 05:03:04.660885] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:50.829  [2024-11-20 05:03:04.661154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130645 ]
00:12:51.088  [2024-11-20 05:03:04.809178] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:51.088  [2024-11-20 05:03:04.834043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:51.088  [2024-11-20 05:03:04.862654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.088  
[2024-11-20T05:03:05.303Z] Copying: 60/60 [kB] (average 58 MBps)
00:12:51.347  
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:51.347   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:51.347    05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:51.347    05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:51.347    05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:51.606  [2024-11-20 05:03:05.341465] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:51.606  [2024-11-20 05:03:05.341764] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130666 ]
00:12:51.606  {
00:12:51.606    "subsystems": [
00:12:51.606      {
00:12:51.606        "subsystem": "bdev",
00:12:51.606        "config": [
00:12:51.606          {
00:12:51.606            "params": {
00:12:51.606              "trtype": "pcie",
00:12:51.606              "traddr": "0000:00:10.0",
00:12:51.606              "name": "Nvme0"
00:12:51.606            },
00:12:51.606            "method": "bdev_nvme_attach_controller"
00:12:51.606          },
00:12:51.606          {
00:12:51.606            "method": "bdev_wait_for_examine"
00:12:51.606          }
00:12:51.606        ]
00:12:51.606      }
00:12:51.606    ]
00:12:51.606  }
00:12:51.606  [2024-11-20 05:03:05.489654] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:51.606  [2024-11-20 05:03:05.515044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:51.606  [2024-11-20 05:03:05.543642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:51.864  
[2024-11-20T05:03:06.080Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:52.123  
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:52.123   05:03:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:52.690   05:03:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62
00:12:52.690    05:03:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:52.690    05:03:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:52.691    05:03:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:52.691  {
00:12:52.691    "subsystems": [
00:12:52.691      {
00:12:52.691        "subsystem": "bdev",
00:12:52.691        "config": [
00:12:52.691          {
00:12:52.691            "params": {
00:12:52.691              "trtype": "pcie",
00:12:52.691              "traddr": "0000:00:10.0",
00:12:52.691              "name": "Nvme0"
00:12:52.691            },
00:12:52.691            "method": "bdev_nvme_attach_controller"
00:12:52.691          },
00:12:52.691          {
00:12:52.691            "method": "bdev_wait_for_examine"
00:12:52.691          }
00:12:52.691        ]
00:12:52.691      }
00:12:52.691    ]
00:12:52.691  }
00:12:52.691  [2024-11-20 05:03:06.553361] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:52.691  [2024-11-20 05:03:06.553684] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130686 ]
00:12:52.950  [2024-11-20 05:03:06.701516] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:52.950  [2024-11-20 05:03:06.726798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:52.950  [2024-11-20 05:03:06.755948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:53.208  
[2024-11-20T05:03:07.165Z] Copying: 56/56 [kB] (average 27 MBps)
00:12:53.208  
00:12:53.208   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62
00:12:53.467    05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:53.467    05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:53.467    05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:53.467  {
00:12:53.467    "subsystems": [
00:12:53.467      {
00:12:53.467        "subsystem": "bdev",
00:12:53.467        "config": [
00:12:53.467          {
00:12:53.467            "params": {
00:12:53.467              "trtype": "pcie",
00:12:53.467              "traddr": "0000:00:10.0",
00:12:53.467              "name": "Nvme0"
00:12:53.467            },
00:12:53.467            "method": "bdev_nvme_attach_controller"
00:12:53.467          },
00:12:53.467          {
00:12:53.467            "method": "bdev_wait_for_examine"
00:12:53.467          }
00:12:53.467        ]
00:12:53.467      }
00:12:53.467    ]
00:12:53.467  }
00:12:53.467  [2024-11-20 05:03:07.221526] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:53.467  [2024-11-20 05:03:07.221793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130694 ]
00:12:53.467  [2024-11-20 05:03:07.370619] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:53.467  [2024-11-20 05:03:07.395626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:53.726  [2024-11-20 05:03:07.424599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:53.726  
[2024-11-20T05:03:07.942Z] Copying: 56/56 [kB] (average 54 MBps)
00:12:53.985  
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:53.985   05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:53.985    05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:53.985    05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:53.985    05:03:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:53.985  {
00:12:53.985    "subsystems": [
00:12:53.985      {
00:12:53.985        "subsystem": "bdev",
00:12:53.985        "config": [
00:12:53.985          {
00:12:53.985            "params": {
00:12:53.985              "trtype": "pcie",
00:12:53.985              "traddr": "0000:00:10.0",
00:12:53.985              "name": "Nvme0"
00:12:53.985            },
00:12:53.985            "method": "bdev_nvme_attach_controller"
00:12:53.985          },
00:12:53.985          {
00:12:53.985            "method": "bdev_wait_for_examine"
00:12:53.985          }
00:12:53.985        ]
00:12:53.985      }
00:12:53.985    ]
00:12:53.985  }
00:12:53.985  [2024-11-20 05:03:07.893522] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:53.985  [2024-11-20 05:03:07.893803] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130715 ]
00:12:54.246  [2024-11-20 05:03:08.042608] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:54.246  [2024-11-20 05:03:08.068278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:54.246  [2024-11-20 05:03:08.100039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:54.505  
[2024-11-20T05:03:08.721Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:54.764  
00:12:54.764   05:03:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:54.764   05:03:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7
00:12:54.764   05:03:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7
00:12:54.764   05:03:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344
00:12:54.764   05:03:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344
00:12:54.764   05:03:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:54.764   05:03:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:55.332   05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62
00:12:55.332    05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:55.332    05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:55.332    05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:55.332  {
00:12:55.332    "subsystems": [
00:12:55.332      {
00:12:55.332        "subsystem": "bdev",
00:12:55.332        "config": [
00:12:55.332          {
00:12:55.332            "params": {
00:12:55.332              "trtype": "pcie",
00:12:55.332              "traddr": "0000:00:10.0",
00:12:55.332              "name": "Nvme0"
00:12:55.332            },
00:12:55.332            "method": "bdev_nvme_attach_controller"
00:12:55.332          },
00:12:55.332          {
00:12:55.332            "method": "bdev_wait_for_examine"
00:12:55.332          }
00:12:55.332        ]
00:12:55.332      }
00:12:55.332    ]
00:12:55.332  }
00:12:55.332  [2024-11-20 05:03:09.099864] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:55.332  [2024-11-20 05:03:09.100141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130737 ]
00:12:55.332  [2024-11-20 05:03:09.248213] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:55.332  [2024-11-20 05:03:09.269097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:55.590  [2024-11-20 05:03:09.299232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:55.590  
[2024-11-20T05:03:09.806Z] Copying: 56/56 [kB] (average 54 MBps)
00:12:55.849  
00:12:55.849   05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62
00:12:55.849    05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:55.849    05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:55.849    05:03:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:55.849  [2024-11-20 05:03:09.768478] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:55.849  {
00:12:55.849    "subsystems": [
00:12:55.849      {
00:12:55.849        "subsystem": "bdev",
00:12:55.849        "config": [
00:12:55.849          {
00:12:55.849            "params": {
00:12:55.849              "trtype": "pcie",
00:12:55.849              "traddr": "0000:00:10.0",
00:12:55.849              "name": "Nvme0"
00:12:55.849            },
00:12:55.849            "method": "bdev_nvme_attach_controller"
00:12:55.849          },
00:12:55.849          {
00:12:55.849            "method": "bdev_wait_for_examine"
00:12:55.849          }
00:12:55.849        ]
00:12:55.849      }
00:12:55.849    ]
00:12:55.849  }
00:12:55.849  [2024-11-20 05:03:09.769220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130757 ]
00:12:56.108  [2024-11-20 05:03:09.917239] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:56.108  [2024-11-20 05:03:09.942165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:56.108  [2024-11-20 05:03:09.970770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:56.366  
[2024-11-20T05:03:10.581Z] Copying: 56/56 [kB] (average 54 MBps)
00:12:56.624  
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:56.624   05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:56.624    05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:56.624    05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:56.624    05:03:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:56.624  {
00:12:56.624    "subsystems": [
00:12:56.624      {
00:12:56.624        "subsystem": "bdev",
00:12:56.624        "config": [
00:12:56.624          {
00:12:56.624            "params": {
00:12:56.624              "trtype": "pcie",
00:12:56.624              "traddr": "0000:00:10.0",
00:12:56.624              "name": "Nvme0"
00:12:56.624            },
00:12:56.624            "method": "bdev_nvme_attach_controller"
00:12:56.624          },
00:12:56.624          {
00:12:56.624            "method": "bdev_wait_for_examine"
00:12:56.624          }
00:12:56.624        ]
00:12:56.624      }
00:12:56.624    ]
00:12:56.624  }
00:12:56.624  [2024-11-20 05:03:10.458986] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:56.624  [2024-11-20 05:03:10.459235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130773 ]
00:12:56.883  [2024-11-20 05:03:10.606708] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:56.883  [2024-11-20 05:03:10.632583] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:56.883  [2024-11-20 05:03:10.661720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:56.883  
[2024-11-20T05:03:11.099Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:12:57.142  
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}"
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:57.142   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:57.709   05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62
00:12:57.709    05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:12:57.709    05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:57.709    05:03:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:57.709  {
00:12:57.709    "subsystems": [
00:12:57.709      {
00:12:57.709        "subsystem": "bdev",
00:12:57.709        "config": [
00:12:57.709          {
00:12:57.709            "params": {
00:12:57.709              "trtype": "pcie",
00:12:57.709              "traddr": "0000:00:10.0",
00:12:57.709              "name": "Nvme0"
00:12:57.709            },
00:12:57.709            "method": "bdev_nvme_attach_controller"
00:12:57.709          },
00:12:57.709          {
00:12:57.709            "method": "bdev_wait_for_examine"
00:12:57.709          }
00:12:57.709        ]
00:12:57.709      }
00:12:57.709    ]
00:12:57.709  }
00:12:57.709  [2024-11-20 05:03:11.548622] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:57.709  [2024-11-20 05:03:11.548893] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130793 ]
00:12:57.969  [2024-11-20 05:03:11.696709] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:57.969  [2024-11-20 05:03:11.722064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:57.969  [2024-11-20 05:03:11.751644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:57.969  
[2024-11-20T05:03:12.185Z] Copying: 48/48 [kB] (average 46 MBps)
00:12:58.228  
00:12:58.228   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62
00:12:58.228    05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:12:58.228    05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:58.228    05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:58.493  {
00:12:58.493    "subsystems": [
00:12:58.493      {
00:12:58.493        "subsystem": "bdev",
00:12:58.493        "config": [
00:12:58.493          {
00:12:58.493            "params": {
00:12:58.493              "trtype": "pcie",
00:12:58.493              "traddr": "0000:00:10.0",
00:12:58.493              "name": "Nvme0"
00:12:58.493            },
00:12:58.493            "method": "bdev_nvme_attach_controller"
00:12:58.493          },
00:12:58.493          {
00:12:58.493            "method": "bdev_wait_for_examine"
00:12:58.493          }
00:12:58.493        ]
00:12:58.493      }
00:12:58.493    ]
00:12:58.493  }
00:12:58.493  [2024-11-20 05:03:12.219668] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:58.493  [2024-11-20 05:03:12.219945] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130806 ]
00:12:58.493  [2024-11-20 05:03:12.368860] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:58.493  [2024-11-20 05:03:12.394398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:58.493  [2024-11-20 05:03:12.425830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:58.770  
[2024-11-20T05:03:12.987Z] Copying: 48/48 [kB] (average 46 MBps)
00:12:59.030  
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:12:59.030   05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:12:59.030    05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:12:59.030    05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:12:59.030    05:03:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:12:59.030  {
00:12:59.030    "subsystems": [
00:12:59.030      {
00:12:59.030        "subsystem": "bdev",
00:12:59.030        "config": [
00:12:59.030          {
00:12:59.030            "params": {
00:12:59.030              "trtype": "pcie",
00:12:59.030              "traddr": "0000:00:10.0",
00:12:59.030              "name": "Nvme0"
00:12:59.030            },
00:12:59.030            "method": "bdev_nvme_attach_controller"
00:12:59.030          },
00:12:59.030          {
00:12:59.030            "method": "bdev_wait_for_examine"
00:12:59.030          }
00:12:59.030        ]
00:12:59.030      }
00:12:59.030    ]
00:12:59.030  }
00:12:59.030  [2024-11-20 05:03:12.914190] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:12:59.030  [2024-11-20 05:03:12.914488] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130827 ]
00:12:59.288  [2024-11-20 05:03:13.066771] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:12:59.288  [2024-11-20 05:03:13.091977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:59.288  [2024-11-20 05:03:13.121116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:59.546  
[2024-11-20T05:03:13.761Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:12:59.804  
00:12:59.804   05:03:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}"
00:12:59.804   05:03:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3
00:12:59.804   05:03:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3
00:12:59.804   05:03:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152
00:12:59.804   05:03:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152
00:12:59.804   05:03:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable
00:12:59.804   05:03:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:13:00.063   05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62
00:13:00.063    05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf
00:13:00.063    05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:13:00.063    05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:13:00.321  {
00:13:00.321    "subsystems": [
00:13:00.321      {
00:13:00.321        "subsystem": "bdev",
00:13:00.321        "config": [
00:13:00.321          {
00:13:00.321            "params": {
00:13:00.321              "trtype": "pcie",
00:13:00.321              "traddr": "0000:00:10.0",
00:13:00.321              "name": "Nvme0"
00:13:00.321            },
00:13:00.321            "method": "bdev_nvme_attach_controller"
00:13:00.321          },
00:13:00.321          {
00:13:00.321            "method": "bdev_wait_for_examine"
00:13:00.321          }
00:13:00.321        ]
00:13:00.321      }
00:13:00.321    ]
00:13:00.321  }
00:13:00.321  [2024-11-20 05:03:14.076399] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:00.321  [2024-11-20 05:03:14.077133] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130847 ]
00:13:00.321  [2024-11-20 05:03:14.226622] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:00.321  [2024-11-20 05:03:14.251057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:00.579  [2024-11-20 05:03:14.289889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:00.579  
[2024-11-20T05:03:14.795Z] Copying: 48/48 [kB] (average 46 MBps)
00:13:00.838  
00:13:00.838   05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62
00:13:00.838    05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf
00:13:00.838    05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:13:00.838    05:03:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:13:00.838  [2024-11-20 05:03:14.764111] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:00.838  [2024-11-20 05:03:14.764385] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130862 ]
00:13:00.838  {
00:13:00.838    "subsystems": [
00:13:00.838      {
00:13:00.838        "subsystem": "bdev",
00:13:00.838        "config": [
00:13:00.838          {
00:13:00.838            "params": {
00:13:00.838              "trtype": "pcie",
00:13:00.838              "traddr": "0000:00:10.0",
00:13:00.838              "name": "Nvme0"
00:13:00.838            },
00:13:00.838            "method": "bdev_nvme_attach_controller"
00:13:00.838          },
00:13:00.838          {
00:13:00.838            "method": "bdev_wait_for_examine"
00:13:00.838          }
00:13:00.838        ]
00:13:00.838      }
00:13:00.838    ]
00:13:00.838  }
00:13:01.097  [2024-11-20 05:03:14.914866] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:01.097  [2024-11-20 05:03:14.942118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:01.097  [2024-11-20 05:03:14.985954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:01.356  
[2024-11-20T05:03:15.571Z] Copying: 48/48 [kB] (average 46 MBps)
00:13:01.614  
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref=
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1
00:13:01.614   05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:13:01.614    05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf
00:13:01.614    05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable
00:13:01.614    05:03:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:13:01.614  [2024-11-20 05:03:15.448324] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:01.614  [2024-11-20 05:03:15.448552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130883 ]
00:13:01.614  {
00:13:01.614    "subsystems": [
00:13:01.614      {
00:13:01.614        "subsystem": "bdev",
00:13:01.614        "config": [
00:13:01.614          {
00:13:01.614            "params": {
00:13:01.614              "trtype": "pcie",
00:13:01.614              "traddr": "0000:00:10.0",
00:13:01.614              "name": "Nvme0"
00:13:01.614            },
00:13:01.614            "method": "bdev_nvme_attach_controller"
00:13:01.614          },
00:13:01.614          {
00:13:01.614            "method": "bdev_wait_for_examine"
00:13:01.614          }
00:13:01.614        ]
00:13:01.614      }
00:13:01.614    ]
00:13:01.614  }
00:13:01.873  [2024-11-20 05:03:15.583917] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:01.873  [2024-11-20 05:03:15.608550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:01.873  [2024-11-20 05:03:15.641167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:01.873  
[2024-11-20T05:03:16.089Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:13:02.132  
00:13:02.132  
00:13:02.132  real	0m15.385s
00:13:02.132  user	0m10.073s
00:13:02.132  sys	0m3.759s
00:13:02.132   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:02.132  ************************************
00:13:02.132   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x
00:13:02.132  END TEST dd_rw
00:13:02.132  ************************************
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:13:02.391  ************************************
00:13:02.391  START TEST dd_rw_offset
00:13:02.391  ************************************
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 ))
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # data=3dxlow88wkspkoxzzgp8ree78vnvxz0xl1k6w3a8a9vlon0nugmvn9ku7v17tnp3e7g67hlolbvgsmycrrd43w86ienwgwdhgpw74kg4fxaz83ecu76gpz91jcvbv7yzrhfedbkf027w113qc4gw9f0u0zg2zgx41gm74zlouj8b7o1itlgyv71z3s52ra8d6hk9vosqk1x0g361lvpeh4kgp6u5wurocxcjclp6q02ei2bmpy7vknedmi2xg9pcbvtpscsb8d5e4oswcl0h5jd1awd43n7k4qmlls0ie8w463lxynj8qy5guo52ttgjrnoibd3pg538em0uu5tor09hc2f7mo1n045alh1uzwcz1ci710daa4uy5q0q5pnpiif0m01de6x8eu0akfbppx929sgvpc593b0zistwfqbjzda0ufm3cy46bq08rm0h2q1yk75hjqn6qudpu8p27r10z08lzcytd5xinm4ur1b5nz535purh3q4a40mdi8nknxi9j84gsm99qvvimknxv8ixq0qi8hg3wd0huy6wjkhh6u0n9toctyca9vtw5zlq00u8f3qwvimug0tir8vav8ipfjwi54phpqpxk7e96qvbfrapja323ju2o5e1coommdmfu7w7tr2g78sj8ogdtuw0sw4ew98wt0qe24alirpl5stf9if5slu3v9q46rejsk87h6d8pa5fef4rbqt4hvobnjcczxv9g1poh8oj4vmvuzssqj71jdoovwes6vje65l4nc9pyhhhedmh5078r4du7uvwmj5tnrhpfqtmja3idr9nh7nioiau62bssc1co6lq7i1n7o8mltoe2lfrycgzdsw6mroqngcj9ejdtloiruwum943qiid5i7aca2lww27bkbkcd9m7o898oosr52catudox0lkth84bzp4izdoi6hj4pt4rxj6m0uslt0yj1v4wijsy00rwjwzpibxthukin8bc8xuqgagf50t29cqymyrtm4rn87tsij5hw2fz4lvhzjo4ba7txt0xfjimaf0z7z9l8292evlctdcc340gqyj0t7kj2g3ks8pvx3d5ewj6u0fk4bm834z9btfkjz9mdg4tgl5111beyvf1stuayt760nyz7yjb3jy4ynu4z28dv79tuyc7plvo9qa2d8sefbcg74to85istwn4u2n1csn7j5pce6ohxtywx744dqbd1apnq7sn52q3opvr14twlqu5dycz4elkadxxgch39we1q7qefbr1h16ymktbt6dcjlft2f5uhzl7vgxwd7dywx29ax3re6feyeo4brfwptstkzs3j78lpwkbqsg6r8zpb4cmfcbxtg0k2cdvpv08ciqcd0s68nn6clxod1d27zwnujr1idcw5itvu5j483328insf4zazojege61ts36wg5hv5mstvyuz9dxgreyp0cxs95noxe957m53zw8dqds14vs6ucw4quxmma2cymrce9ipqk8a91sp2erv35gb9i1ey90klnou5ymvle0orbm0aueazok5u130b3xo6xvevjiubqy58gcaj4oscz8to2n64ixbvz4yf8strgujfjqj9drnq1pbsdlxtm9n4miuouwnz4smfa2bsyzb10i8p0x1760joteki2gbxsvnxobeml7kn2732k05hpn10f8y88dam8hwet1sbe2tltnlrhiiq3lv26f5pwtyl3vfmraptp6zyyfwi2svbzouudplqjyc9wez3bmpbrs7f9qkpdgdsil8oeeqcqx2kqz57wpd1t4g4e12ap2ped45d0bm2urg9lsrjl7mikuxqty96x4nq9x7ntc783kj6h7zaagneea5z3h0sr0uv78ow78z25oag7lzq19rbmq2rkce3m6
awwacgrdq80w96n8c0sm5ych0b9m575gf525xtx7sb6wy3lt0beu6ccgt9sg9gbum35upndl0hexusjnaszg6973ouhdltsnc0e76j7r7iqhwovb5lbwi2rn64ytcb357uvyzroqrggv6xukpkkbrc9k4mvkl45nko7zfvoopx7xcw3s1yyzvnnobhd8qdyoa2c2te290gerxioqdpdhgfl2208wvwuc3dkszbln9jnm2t56vvw5qavzx5cvqfwc8eqcs5w7gk0ygulje9dk1nyqfa6ikv418sm1uoblf8f9rlus1nxbazrpsnvnf9l9oxwbi5ttldfrmrqy7wmj1pvuh2pfx0mu98j4pp10ugf9k9joq2yjs55d5idpgjgqsrkkqcx0mo0vzew6jkb4ciaidko6ets3agd7kpg4c5732s7su6n84r8km7t2uydys7oajfsbyeqbq23tirnt5le5jcbb2tpiiy3v8111su0kn9imppljwa4k7bzdsjf0llslsn1nbeyiuytlzrp369plcodnsiwxl54gjpyum5rlibvgf3wl0baqx2efd6nbz0xcgdokglredrs99vzvs7zpmjjo3a6a8jbx51mupc69nbontcs93bvcaooyu7873kj7cvd0l0oyukz9x6mg04bcwsb5mwu1j2fxpcz27wn2l53438g4s024zvpe15kafozsznbqbmus4ubzrha03pb3bkqrhgxsqk12xg34xnwwe5at9y62ve48286ac4s4j49o4yt1tlo2huh593g42hoh8eefnv5ftcgrtmm52twhsvf9gt2s1l8haneg0mngg5hhi8gyu9n06dmkit9fa1g11yr4v0wl67qa897phxzpax62mv8tox8ctnul75i292hl62hneygc0voxlnsyn3rk9m3g6jw915ep9cn1ssyuixx9sp8gwpr45new2m6cqgh40zpoogrosyuck6nz5xaz6cdl7g0qhscrt00ddq4rnwdzb96tq07rkhz8r616mafx2g4j6ruqtfn2kpkzzgs389ug7r8maujm4bgfsnkoc6g0bzsq6ry2gcrhv6srq9o9eavjdsfw8z5rdyjt6mzdri3sv6spela7rjnfycgq791t44ydw3k10n76z7ubcut9xsngr7k2ef20kpj61evm9frinbsiizobdr923zz6ndjxm0rc6qinqqiytjxpihf8vhvzi2xljckko2s6rk6l2ugpqvfoiroraplg4spg91y47kjsz24z31ya9e16hx8rwfzq2shi6ntabg8bgk3rhrr8mn1ybmc6uf1p7nc2km7ept9urqkg8ppvmh2hpov0zrk4jck0othr3pw4fj3tpl9ny4msowo533m97ufys7kpwtyd2rm9jq2k2bip4ymyknkke0dtaxu6ku57hj1fbkm0jus3rlf451jy24ph3puriq505bit7bf99hew6qdhu6aod6ftuwrovi8g5yln84uvtu71391mxey1fn2cm6qc6gjz77lgg4duuifvty5uba2sw41drsyapw190ylotxq4wna1ockcgu6x9wqmn4xd83pgsr82bi8j4yjyylr39ehtty75cruu2p6hx4siofc1g5cabb16omqn0ome89tn37dzvsd18g7d0i4dbiro7ajvtohny22hklunt1byxtbhh9yjhsqqeq7qipzks2o9fim0vd7fxpjwlw0zcskqfxakmhoihp2yrg3xv2o3gw5eyawvmt6kbfcwbhipq6sj37gvp1dz30vkngv6p4ascu8b6cirhvuv741hk6fmqvzxjrb37xnuxh9izaugytjkfrplq1g7da70aiocbjy1qz8cw12mgty3mzb6c2ryxuo8fuptyzwh5h8sozp7o1yp499th1huktl75mvcuktotnxi8ikli6cfohhuqfhga7pmuao2ple9v39ji8etq79bdcpjlx12jetqpmp9uln294p32nsi9ea7
7n4a8fmnnp8zbqe4hcjmno2ehb5mnpnyxcjvd7eai6nzprc12klitqsfgllzdw9zlgvrlu7lls7c4vwevwf171pnnfvskm5nqnyqinn756mvi9sq6lhb6k7hudzdxd3d4u68uq7p9gc2x5b426uopln59p2ccp5v7a2a60fhuhx9hmt1ku8s412vxnp4f
00:13:02.391   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62
00:13:02.391    05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf
00:13:02.391    05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable
00:13:02.391    05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:13:02.391  [2024-11-20 05:03:16.180749] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:02.391  [2024-11-20 05:03:16.180988] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130918 ]
00:13:02.391  {
00:13:02.391    "subsystems": [
00:13:02.391      {
00:13:02.391        "subsystem": "bdev",
00:13:02.391        "config": [
00:13:02.391          {
00:13:02.391            "params": {
00:13:02.391              "trtype": "pcie",
00:13:02.391              "traddr": "0000:00:10.0",
00:13:02.391              "name": "Nvme0"
00:13:02.391            },
00:13:02.391            "method": "bdev_nvme_attach_controller"
00:13:02.391          },
00:13:02.391          {
00:13:02.391            "method": "bdev_wait_for_examine"
00:13:02.391          }
00:13:02.391        ]
00:13:02.391      }
00:13:02.391    ]
00:13:02.391  }
00:13:02.391  [2024-11-20 05:03:16.316826] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:02.391  [2024-11-20 05:03:16.342146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:02.650  [2024-11-20 05:03:16.376884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:02.650  
[2024-11-20T05:03:16.866Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:13:02.909  
00:13:02.909   05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62
00:13:02.909    05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf
00:13:02.909    05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable
00:13:02.909    05:03:16 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:13:02.909  {
00:13:02.909    "subsystems": [
00:13:02.909      {
00:13:02.909        "subsystem": "bdev",
00:13:02.909        "config": [
00:13:02.909          {
00:13:02.909            "params": {
00:13:02.909              "trtype": "pcie",
00:13:02.909              "traddr": "0000:00:10.0",
00:13:02.909              "name": "Nvme0"
00:13:02.909            },
00:13:02.909            "method": "bdev_nvme_attach_controller"
00:13:02.909          },
00:13:02.909          {
00:13:02.909            "method": "bdev_wait_for_examine"
00:13:02.909          }
00:13:02.909        ]
00:13:02.909      }
00:13:02.909    ]
00:13:02.909  }
00:13:02.909  [2024-11-20 05:03:16.850548] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:02.909  [2024-11-20 05:03:16.850846] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130935 ]
00:13:03.167  [2024-11-20 05:03:17.000503] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:03.167  [2024-11-20 05:03:17.026021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:03.167  [2024-11-20 05:03:17.063214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:03.426  
[2024-11-20T05:03:17.643Z] Copying: 4096/4096 [B] (average 4000 kBps)
00:13:03.686  
00:13:03.686   05:03:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ 3dxlow88wkspkoxzzgp8ree78vnvxz0xl1k6w3a8a9vlon0nugmvn9ku7v17tnp3e7g67hlolbvgsmycrrd43w86ienwgwdhgpw74kg4fxaz83ecu76gpz91jcvbv7yzrhfedbkf027w113qc4gw9f0u0zg2zgx41gm74zlouj8b7o1itlgyv71z3s52ra8d6hk9vosqk1x0g361lvpeh4kgp6u5wurocxcjclp6q02ei2bmpy7vknedmi2xg9pcbvtpscsb8d5e4oswcl0h5jd1awd43n7k4qmlls0ie8w463lxynj8qy5guo52ttgjrnoibd3pg538em0uu5tor09hc2f7mo1n045alh1uzwcz1ci710daa4uy5q0q5pnpiif0m01de6x8eu0akfbppx929sgvpc593b0zistwfqbjzda0ufm3cy46bq08rm0h2q1yk75hjqn6qudpu8p27r10z08lzcytd5xinm4ur1b5nz535purh3q4a40mdi8nknxi9j84gsm99qvvimknxv8ixq0qi8hg3wd0huy6wjkhh6u0n9toctyca9vtw5zlq00u8f3qwvimug0tir8vav8ipfjwi54phpqpxk7e96qvbfrapja323ju2o5e1coommdmfu7w7tr2g78sj8ogdtuw0sw4ew98wt0qe24alirpl5stf9if5slu3v9q46rejsk87h6d8pa5fef4rbqt4hvobnjcczxv9g1poh8oj4vmvuzssqj71jdoovwes6vje65l4nc9pyhhhedmh5078r4du7uvwmj5tnrhpfqtmja3idr9nh7nioiau62bssc1co6lq7i1n7o8mltoe2lfrycgzdsw6mroqngcj9ejdtloiruwum943qiid5i7aca2lww27bkbkcd9m7o898oosr52catudox0lkth84bzp4izdoi6hj4pt4rxj6m0uslt0yj1v4wijsy00rwjwzpibxthukin8bc8xuqgagf50t29cqymyrtm4rn87tsij5hw2fz4lvhzjo4ba7txt0xfjimaf0z7z9l8292evlctdcc340gqyj0t7kj2g3ks8pvx3d5ewj6u0fk4bm834z9btfkjz9mdg4tgl5111beyvf1stuayt760nyz7yjb3jy4ynu4z28dv79tuyc7plvo9qa2d8sefbcg74to85istwn4u2n1csn7j5pce6ohxtywx744dqbd1apnq7sn52q3opvr14twlqu5dycz4elkadxxgch39we1q7qefbr1h16ymktbt6dcjlft2f5uhzl7vgxwd7dywx29ax3re6feyeo4brfwptstkzs3j78lpwkbqsg6r8zpb4cmfcbxtg0k2cdvpv08ciqcd0s68nn6clxod1d27zwnujr1idcw5itvu5j483328insf4zazojege61ts36wg5hv5mstvyuz9dxgreyp0cxs95noxe957m53zw8dqds14vs6ucw4quxmma2cymrce9ipqk8a91sp2erv35gb9i1ey90klnou5ymvle0orbm0aueazok5u130b3xo6xvevjiubqy58gcaj4oscz8to2n64ixbvz4yf8strgujfjqj9drnq1pbsdlxtm9n4miuouwnz4smfa2bsyzb10i8p0x1760joteki2gbxsvnxobeml7kn2732k05hpn10f8y88dam8hwet1sbe2tltnlrhiiq3lv26f5pwtyl3vfmraptp6zyyfwi2svbzouudplqjyc9wez3bmpbrs7f9qkpdgdsil8oeeqcqx2kqz57wpd1t4g4e12ap2ped45d0bm2urg9lsrjl7mikuxqty96x4nq9x7ntc783kj6h7zaagneea5z3h0sr0uv78ow78z25oag7lzq19rbmq2rkce3m6aw
wacgrdq80w96n8c0sm5ych0b9m575gf525xtx7sb6wy3lt0beu6ccgt9sg9gbum35upndl0hexusjnaszg6973ouhdltsnc0e76j7r7iqhwovb5lbwi2rn64ytcb357uvyzroqrggv6xukpkkbrc9k4mvkl45nko7zfvoopx7xcw3s1yyzvnnobhd8qdyoa2c2te290gerxioqdpdhgfl2208wvwuc3dkszbln9jnm2t56vvw5qavzx5cvqfwc8eqcs5w7gk0ygulje9dk1nyqfa6ikv418sm1uoblf8f9rlus1nxbazrpsnvnf9l9oxwbi5ttldfrmrqy7wmj1pvuh2pfx0mu98j4pp10ugf9k9joq2yjs55d5idpgjgqsrkkqcx0mo0vzew6jkb4ciaidko6ets3agd7kpg4c5732s7su6n84r8km7t2uydys7oajfsbyeqbq23tirnt5le5jcbb2tpiiy3v8111su0kn9imppljwa4k7bzdsjf0llslsn1nbeyiuytlzrp369plcodnsiwxl54gjpyum5rlibvgf3wl0baqx2efd6nbz0xcgdokglredrs99vzvs7zpmjjo3a6a8jbx51mupc69nbontcs93bvcaooyu7873kj7cvd0l0oyukz9x6mg04bcwsb5mwu1j2fxpcz27wn2l53438g4s024zvpe15kafozsznbqbmus4ubzrha03pb3bkqrhgxsqk12xg34xnwwe5at9y62ve48286ac4s4j49o4yt1tlo2huh593g42hoh8eefnv5ftcgrtmm52twhsvf9gt2s1l8haneg0mngg5hhi8gyu9n06dmkit9fa1g11yr4v0wl67qa897phxzpax62mv8tox8ctnul75i292hl62hneygc0voxlnsyn3rk9m3g6jw915ep9cn1ssyuixx9sp8gwpr45new2m6cqgh40zpoogrosyuck6nz5xaz6cdl7g0qhscrt00ddq4rnwdzb96tq07rkhz8r616mafx2g4j6ruqtfn2kpkzzgs389ug7r8maujm4bgfsnkoc6g0bzsq6ry2gcrhv6srq9o9eavjdsfw8z5rdyjt6mzdri3sv6spela7rjnfycgq791t44ydw3k10n76z7ubcut9xsngr7k2ef20kpj61evm9frinbsiizobdr923zz6ndjxm0rc6qinqqiytjxpihf8vhvzi2xljckko2s6rk6l2ugpqvfoiroraplg4spg91y47kjsz24z31ya9e16hx8rwfzq2shi6ntabg8bgk3rhrr8mn1ybmc6uf1p7nc2km7ept9urqkg8ppvmh2hpov0zrk4jck0othr3pw4fj3tpl9ny4msowo533m97ufys7kpwtyd2rm9jq2k2bip4ymyknkke0dtaxu6ku57hj1fbkm0jus3rlf451jy24ph3puriq505bit7bf99hew6qdhu6aod6ftuwrovi8g5yln84uvtu71391mxey1fn2cm6qc6gjz77lgg4duuifvty5uba2sw41drsyapw190ylotxq4wna1ockcgu6x9wqmn4xd83pgsr82bi8j4yjyylr39ehtty75cruu2p6hx4siofc1g5cabb16omqn0ome89tn37dzvsd18g7d0i4dbiro7ajvtohny22hklunt1byxtbhh9yjhsqqeq7qipzks2o9fim0vd7fxpjwlw0zcskqfxakmhoihp2yrg3xv2o3gw5eyawvmt6kbfcwbhipq6sj37gvp1dz30vkngv6p4ascu8b6cirhvuv741hk6fmqvzxjrb37xnuxh9izaugytjkfrplq1g7da70aiocbjy1qz8cw12mgty3mzb6c2ryxuo8fuptyzwh5h8sozp7o1yp499th1huktl75mvcuktotnxi8ikli6cfohhuqfhga7pmuao2ple9v39ji8etq79bdcpjlx12jetqpmp9uln294p32nsi9ea77n
4a8fmnnp8zbqe4hcjmno2ehb5mnpnyxcjvd7eai6nzprc12klitqsfgllzdw9zlgvrlu7lls7c4vwevwf171pnnfvskm5nqnyqinn756mvi9sq6lhb6k7hudzdxd3d4u68uq7p9gc2x5b426uopln59p2ccp5v7a2a60fhuhx9hmt1ku8s412vxnp4f == \3\d\x\l\o\w\8\8\w\k\s\p\k\o\x\z\z\g\p\8\r\e\e\7\8\v\n\v\x\z\0\x\l\1\k\6\w\3\a\8\a\9\v\l\o\n\0\n\u\g\m\v\n\9\k\u\7\v\1\7\t\n\p\3\e\7\g\6\7\h\l\o\l\b\v\g\s\m\y\c\r\r\d\4\3\w\8\6\i\e\n\w\g\w\d\h\g\p\w\7\4\k\g\4\f\x\a\z\8\3\e\c\u\7\6\g\p\z\9\1\j\c\v\b\v\7\y\z\r\h\f\e\d\b\k\f\0\2\7\w\1\1\3\q\c\4\g\w\9\f\0\u\0\z\g\2\z\g\x\4\1\g\m\7\4\z\l\o\u\j\8\b\7\o\1\i\t\l\g\y\v\7\1\z\3\s\5\2\r\a\8\d\6\h\k\9\v\o\s\q\k\1\x\0\g\3\6\1\l\v\p\e\h\4\k\g\p\6\u\5\w\u\r\o\c\x\c\j\c\l\p\6\q\0\2\e\i\2\b\m\p\y\7\v\k\n\e\d\m\i\2\x\g\9\p\c\b\v\t\p\s\c\s\b\8\d\5\e\4\o\s\w\c\l\0\h\5\j\d\1\a\w\d\4\3\n\7\k\4\q\m\l\l\s\0\i\e\8\w\4\6\3\l\x\y\n\j\8\q\y\5\g\u\o\5\2\t\t\g\j\r\n\o\i\b\d\3\p\g\5\3\8\e\m\0\u\u\5\t\o\r\0\9\h\c\2\f\7\m\o\1\n\0\4\5\a\l\h\1\u\z\w\c\z\1\c\i\7\1\0\d\a\a\4\u\y\5\q\0\q\5\p\n\p\i\i\f\0\m\0\1\d\e\6\x\8\e\u\0\a\k\f\b\p\p\x\9\2\9\s\g\v\p\c\5\9\3\b\0\z\i\s\t\w\f\q\b\j\z\d\a\0\u\f\m\3\c\y\4\6\b\q\0\8\r\m\0\h\2\q\1\y\k\7\5\h\j\q\n\6\q\u\d\p\u\8\p\2\7\r\1\0\z\0\8\l\z\c\y\t\d\5\x\i\n\m\4\u\r\1\b\5\n\z\5\3\5\p\u\r\h\3\q\4\a\4\0\m\d\i\8\n\k\n\x\i\9\j\8\4\g\s\m\9\9\q\v\v\i\m\k\n\x\v\8\i\x\q\0\q\i\8\h\g\3\w\d\0\h\u\y\6\w\j\k\h\h\6\u\0\n\9\t\o\c\t\y\c\a\9\v\t\w\5\z\l\q\0\0\u\8\f\3\q\w\v\i\m\u\g\0\t\i\r\8\v\a\v\8\i\p\f\j\w\i\5\4\p\h\p\q\p\x\k\7\e\9\6\q\v\b\f\r\a\p\j\a\3\2\3\j\u\2\o\5\e\1\c\o\o\m\m\d\m\f\u\7\w\7\t\r\2\g\7\8\s\j\8\o\g\d\t\u\w\0\s\w\4\e\w\9\8\w\t\0\q\e\2\4\a\l\i\r\p\l\5\s\t\f\9\i\f\5\s\l\u\3\v\9\q\4\6\r\e\j\s\k\8\7\h\6\d\8\p\a\5\f\e\f\4\r\b\q\t\4\h\v\o\b\n\j\c\c\z\x\v\9\g\1\p\o\h\8\o\j\4\v\m\v\u\z\s\s\q\j\7\1\j\d\o\o\v\w\e\s\6\v\j\e\6\5\l\4\n\c\9\p\y\h\h\h\e\d\m\h\5\0\7\8\r\4\d\u\7\u\v\w\m\j\5\t\n\r\h\p\f\q\t\m\j\a\3\i\d\r\9\n\h\7\n\i\o\i\a\u\6\2\b\s\s\c\1\c\o\6\l\q\7\i\1\n\7\o\8\m\l\t\o\e\2\l\f\r\y\c\g\z\d\s\w\6\m\r\o\q\n\g\c\j\9\e\j\d\t\l\o\i\r\u\w\u\m\9\4\3\q\i\i\d\5\i\7\a\c\a\2\l\w\w\2\7\b\k\b\
k\c\d\9\m\7\o\8\9\8\o\o\s\r\5\2\c\a\t\u\d\o\x\0\l\k\t\h\8\4\b\z\p\4\i\z\d\o\i\6\h\j\4\p\t\4\r\x\j\6\m\0\u\s\l\t\0\y\j\1\v\4\w\i\j\s\y\0\0\r\w\j\w\z\p\i\b\x\t\h\u\k\i\n\8\b\c\8\x\u\q\g\a\g\f\5\0\t\2\9\c\q\y\m\y\r\t\m\4\r\n\8\7\t\s\i\j\5\h\w\2\f\z\4\l\v\h\z\j\o\4\b\a\7\t\x\t\0\x\f\j\i\m\a\f\0\z\7\z\9\l\8\2\9\2\e\v\l\c\t\d\c\c\3\4\0\g\q\y\j\0\t\7\k\j\2\g\3\k\s\8\p\v\x\3\d\5\e\w\j\6\u\0\f\k\4\b\m\8\3\4\z\9\b\t\f\k\j\z\9\m\d\g\4\t\g\l\5\1\1\1\b\e\y\v\f\1\s\t\u\a\y\t\7\6\0\n\y\z\7\y\j\b\3\j\y\4\y\n\u\4\z\2\8\d\v\7\9\t\u\y\c\7\p\l\v\o\9\q\a\2\d\8\s\e\f\b\c\g\7\4\t\o\8\5\i\s\t\w\n\4\u\2\n\1\c\s\n\7\j\5\p\c\e\6\o\h\x\t\y\w\x\7\4\4\d\q\b\d\1\a\p\n\q\7\s\n\5\2\q\3\o\p\v\r\1\4\t\w\l\q\u\5\d\y\c\z\4\e\l\k\a\d\x\x\g\c\h\3\9\w\e\1\q\7\q\e\f\b\r\1\h\1\6\y\m\k\t\b\t\6\d\c\j\l\f\t\2\f\5\u\h\z\l\7\v\g\x\w\d\7\d\y\w\x\2\9\a\x\3\r\e\6\f\e\y\e\o\4\b\r\f\w\p\t\s\t\k\z\s\3\j\7\8\l\p\w\k\b\q\s\g\6\r\8\z\p\b\4\c\m\f\c\b\x\t\g\0\k\2\c\d\v\p\v\0\8\c\i\q\c\d\0\s\6\8\n\n\6\c\l\x\o\d\1\d\2\7\z\w\n\u\j\r\1\i\d\c\w\5\i\t\v\u\5\j\4\8\3\3\2\8\i\n\s\f\4\z\a\z\o\j\e\g\e\6\1\t\s\3\6\w\g\5\h\v\5\m\s\t\v\y\u\z\9\d\x\g\r\e\y\p\0\c\x\s\9\5\n\o\x\e\9\5\7\m\5\3\z\w\8\d\q\d\s\1\4\v\s\6\u\c\w\4\q\u\x\m\m\a\2\c\y\m\r\c\e\9\i\p\q\k\8\a\9\1\s\p\2\e\r\v\3\5\g\b\9\i\1\e\y\9\0\k\l\n\o\u\5\y\m\v\l\e\0\o\r\b\m\0\a\u\e\a\z\o\k\5\u\1\3\0\b\3\x\o\6\x\v\e\v\j\i\u\b\q\y\5\8\g\c\a\j\4\o\s\c\z\8\t\o\2\n\6\4\i\x\b\v\z\4\y\f\8\s\t\r\g\u\j\f\j\q\j\9\d\r\n\q\1\p\b\s\d\l\x\t\m\9\n\4\m\i\u\o\u\w\n\z\4\s\m\f\a\2\b\s\y\z\b\1\0\i\8\p\0\x\1\7\6\0\j\o\t\e\k\i\2\g\b\x\s\v\n\x\o\b\e\m\l\7\k\n\2\7\3\2\k\0\5\h\p\n\1\0\f\8\y\8\8\d\a\m\8\h\w\e\t\1\s\b\e\2\t\l\t\n\l\r\h\i\i\q\3\l\v\2\6\f\5\p\w\t\y\l\3\v\f\m\r\a\p\t\p\6\z\y\y\f\w\i\2\s\v\b\z\o\u\u\d\p\l\q\j\y\c\9\w\e\z\3\b\m\p\b\r\s\7\f\9\q\k\p\d\g\d\s\i\l\8\o\e\e\q\c\q\x\2\k\q\z\5\7\w\p\d\1\t\4\g\4\e\1\2\a\p\2\p\e\d\4\5\d\0\b\m\2\u\r\g\9\l\s\r\j\l\7\m\i\k\u\x\q\t\y\9\6\x\4\n\q\9\x\7\n\t\c\7\8\3\k\j\6\h\7\z\a\a\g\n\e\e\a\5\z\3\h\0\s\r\0\u\v\7\8\o\w\7\8\z\2\5\o\a\g\7\l\z\q\1\9\r\b\m\q\2\r\k\c\e\
3\m\6\a\w\w\a\c\g\r\d\q\8\0\w\9\6\n\8\c\0\s\m\5\y\c\h\0\b\9\m\5\7\5\g\f\5\2\5\x\t\x\7\s\b\6\w\y\3\l\t\0\b\e\u\6\c\c\g\t\9\s\g\9\g\b\u\m\3\5\u\p\n\d\l\0\h\e\x\u\s\j\n\a\s\z\g\6\9\7\3\o\u\h\d\l\t\s\n\c\0\e\7\6\j\7\r\7\i\q\h\w\o\v\b\5\l\b\w\i\2\r\n\6\4\y\t\c\b\3\5\7\u\v\y\z\r\o\q\r\g\g\v\6\x\u\k\p\k\k\b\r\c\9\k\4\m\v\k\l\4\5\n\k\o\7\z\f\v\o\o\p\x\7\x\c\w\3\s\1\y\y\z\v\n\n\o\b\h\d\8\q\d\y\o\a\2\c\2\t\e\2\9\0\g\e\r\x\i\o\q\d\p\d\h\g\f\l\2\2\0\8\w\v\w\u\c\3\d\k\s\z\b\l\n\9\j\n\m\2\t\5\6\v\v\w\5\q\a\v\z\x\5\c\v\q\f\w\c\8\e\q\c\s\5\w\7\g\k\0\y\g\u\l\j\e\9\d\k\1\n\y\q\f\a\6\i\k\v\4\1\8\s\m\1\u\o\b\l\f\8\f\9\r\l\u\s\1\n\x\b\a\z\r\p\s\n\v\n\f\9\l\9\o\x\w\b\i\5\t\t\l\d\f\r\m\r\q\y\7\w\m\j\1\p\v\u\h\2\p\f\x\0\m\u\9\8\j\4\p\p\1\0\u\g\f\9\k\9\j\o\q\2\y\j\s\5\5\d\5\i\d\p\g\j\g\q\s\r\k\k\q\c\x\0\m\o\0\v\z\e\w\6\j\k\b\4\c\i\a\i\d\k\o\6\e\t\s\3\a\g\d\7\k\p\g\4\c\5\7\3\2\s\7\s\u\6\n\8\4\r\8\k\m\7\t\2\u\y\d\y\s\7\o\a\j\f\s\b\y\e\q\b\q\2\3\t\i\r\n\t\5\l\e\5\j\c\b\b\2\t\p\i\i\y\3\v\8\1\1\1\s\u\0\k\n\9\i\m\p\p\l\j\w\a\4\k\7\b\z\d\s\j\f\0\l\l\s\l\s\n\1\n\b\e\y\i\u\y\t\l\z\r\p\3\6\9\p\l\c\o\d\n\s\i\w\x\l\5\4\g\j\p\y\u\m\5\r\l\i\b\v\g\f\3\w\l\0\b\a\q\x\2\e\f\d\6\n\b\z\0\x\c\g\d\o\k\g\l\r\e\d\r\s\9\9\v\z\v\s\7\z\p\m\j\j\o\3\a\6\a\8\j\b\x\5\1\m\u\p\c\6\9\n\b\o\n\t\c\s\9\3\b\v\c\a\o\o\y\u\7\8\7\3\k\j\7\c\v\d\0\l\0\o\y\u\k\z\9\x\6\m\g\0\4\b\c\w\s\b\5\m\w\u\1\j\2\f\x\p\c\z\2\7\w\n\2\l\5\3\4\3\8\g\4\s\0\2\4\z\v\p\e\1\5\k\a\f\o\z\s\z\n\b\q\b\m\u\s\4\u\b\z\r\h\a\0\3\p\b\3\b\k\q\r\h\g\x\s\q\k\1\2\x\g\3\4\x\n\w\w\e\5\a\t\9\y\6\2\v\e\4\8\2\8\6\a\c\4\s\4\j\4\9\o\4\y\t\1\t\l\o\2\h\u\h\5\9\3\g\4\2\h\o\h\8\e\e\f\n\v\5\f\t\c\g\r\t\m\m\5\2\t\w\h\s\v\f\9\g\t\2\s\1\l\8\h\a\n\e\g\0\m\n\g\g\5\h\h\i\8\g\y\u\9\n\0\6\d\m\k\i\t\9\f\a\1\g\1\1\y\r\4\v\0\w\l\6\7\q\a\8\9\7\p\h\x\z\p\a\x\6\2\m\v\8\t\o\x\8\c\t\n\u\l\7\5\i\2\9\2\h\l\6\2\h\n\e\y\g\c\0\v\o\x\l\n\s\y\n\3\r\k\9\m\3\g\6\j\w\9\1\5\e\p\9\c\n\1\s\s\y\u\i\x\x\9\s\p\8\g\w\p\r\4\5\n\e\w\2\m\6\c\q\g\h\4\0\z\p\o\o\g\r\o\s\y\u\c\k\6\n\z\5\x\a\z\6\c\d\l\7\g\0\q\h\s\c\r\t\
0\0\d\d\q\4\r\n\w\d\z\b\9\6\t\q\0\7\r\k\h\z\8\r\6\1\6\m\a\f\x\2\g\4\j\6\r\u\q\t\f\n\2\k\p\k\z\z\g\s\3\8\9\u\g\7\r\8\m\a\u\j\m\4\b\g\f\s\n\k\o\c\6\g\0\b\z\s\q\6\r\y\2\g\c\r\h\v\6\s\r\q\9\o\9\e\a\v\j\d\s\f\w\8\z\5\r\d\y\j\t\6\m\z\d\r\i\3\s\v\6\s\p\e\l\a\7\r\j\n\f\y\c\g\q\7\9\1\t\4\4\y\d\w\3\k\1\0\n\7\6\z\7\u\b\c\u\t\9\x\s\n\g\r\7\k\2\e\f\2\0\k\p\j\6\1\e\v\m\9\f\r\i\n\b\s\i\i\z\o\b\d\r\9\2\3\z\z\6\n\d\j\x\m\0\r\c\6\q\i\n\q\q\i\y\t\j\x\p\i\h\f\8\v\h\v\z\i\2\x\l\j\c\k\k\o\2\s\6\r\k\6\l\2\u\g\p\q\v\f\o\i\r\o\r\a\p\l\g\4\s\p\g\9\1\y\4\7\k\j\s\z\2\4\z\3\1\y\a\9\e\1\6\h\x\8\r\w\f\z\q\2\s\h\i\6\n\t\a\b\g\8\b\g\k\3\r\h\r\r\8\m\n\1\y\b\m\c\6\u\f\1\p\7\n\c\2\k\m\7\e\p\t\9\u\r\q\k\g\8\p\p\v\m\h\2\h\p\o\v\0\z\r\k\4\j\c\k\0\o\t\h\r\3\p\w\4\f\j\3\t\p\l\9\n\y\4\m\s\o\w\o\5\3\3\m\9\7\u\f\y\s\7\k\p\w\t\y\d\2\r\m\9\j\q\2\k\2\b\i\p\4\y\m\y\k\n\k\k\e\0\d\t\a\x\u\6\k\u\5\7\h\j\1\f\b\k\m\0\j\u\s\3\r\l\f\4\5\1\j\y\2\4\p\h\3\p\u\r\i\q\5\0\5\b\i\t\7\b\f\9\9\h\e\w\6\q\d\h\u\6\a\o\d\6\f\t\u\w\r\o\v\i\8\g\5\y\l\n\8\4\u\v\t\u\7\1\3\9\1\m\x\e\y\1\f\n\2\c\m\6\q\c\6\g\j\z\7\7\l\g\g\4\d\u\u\i\f\v\t\y\5\u\b\a\2\s\w\4\1\d\r\s\y\a\p\w\1\9\0\y\l\o\t\x\q\4\w\n\a\1\o\c\k\c\g\u\6\x\9\w\q\m\n\4\x\d\8\3\p\g\s\r\8\2\b\i\8\j\4\y\j\y\y\l\r\3\9\e\h\t\t\y\7\5\c\r\u\u\2\p\6\h\x\4\s\i\o\f\c\1\g\5\c\a\b\b\1\6\o\m\q\n\0\o\m\e\8\9\t\n\3\7\d\z\v\s\d\1\8\g\7\d\0\i\4\d\b\i\r\o\7\a\j\v\t\o\h\n\y\2\2\h\k\l\u\n\t\1\b\y\x\t\b\h\h\9\y\j\h\s\q\q\e\q\7\q\i\p\z\k\s\2\o\9\f\i\m\0\v\d\7\f\x\p\j\w\l\w\0\z\c\s\k\q\f\x\a\k\m\h\o\i\h\p\2\y\r\g\3\x\v\2\o\3\g\w\5\e\y\a\w\v\m\t\6\k\b\f\c\w\b\h\i\p\q\6\s\j\3\7\g\v\p\1\d\z\3\0\v\k\n\g\v\6\p\4\a\s\c\u\8\b\6\c\i\r\h\v\u\v\7\4\1\h\k\6\f\m\q\v\z\x\j\r\b\3\7\x\n\u\x\h\9\i\z\a\u\g\y\t\j\k\f\r\p\l\q\1\g\7\d\a\7\0\a\i\o\c\b\j\y\1\q\z\8\c\w\1\2\m\g\t\y\3\m\z\b\6\c\2\r\y\x\u\o\8\f\u\p\t\y\z\w\h\5\h\8\s\o\z\p\7\o\1\y\p\4\9\9\t\h\1\h\u\k\t\l\7\5\m\v\c\u\k\t\o\t\n\x\i\8\i\k\l\i\6\c\f\o\h\h\u\q\f\h\g\a\7\p\m\u\a\o\2\p\l\e\9\v\3\9\j\i\8\e\t\q\7\9\b\d\c\p\j\l\x\1\2\j\e\t\q\p\m\p\9\u\l\n\2\9\4\p\3\2\n\s\i\9\
e\a\7\7\n\4\a\8\f\m\n\n\p\8\z\b\q\e\4\h\c\j\m\n\o\2\e\h\b\5\m\n\p\n\y\x\c\j\v\d\7\e\a\i\6\n\z\p\r\c\1\2\k\l\i\t\q\s\f\g\l\l\z\d\w\9\z\l\g\v\r\l\u\7\l\l\s\7\c\4\v\w\e\v\w\f\1\7\1\p\n\n\f\v\s\k\m\5\n\q\n\y\q\i\n\n\7\5\6\m\v\i\9\s\q\6\l\h\b\6\k\7\h\u\d\z\d\x\d\3\d\4\u\6\8\u\q\7\p\9\g\c\2\x\5\b\4\2\6\u\o\p\l\n\5\9\p\2\c\c\p\5\v\7\a\2\a\6\0\f\h\u\h\x\9\h\m\t\1\k\u\8\s\4\1\2\v\x\n\p\4\f ]]
00:13:03.687  ************************************
00:13:03.687  END TEST dd_rw_offset
00:13:03.687  ************************************
00:13:03.687  
00:13:03.687  real	0m1.398s
00:13:03.687  user	0m0.810s
00:13:03.687  sys	0m0.429s
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref=
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1
00:13:03.687   05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62
00:13:03.687    05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf
00:13:03.687    05:03:17 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:13:03.687    05:03:17 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:13:03.687  {
00:13:03.687    "subsystems": [
00:13:03.687      {
00:13:03.687        "subsystem": "bdev",
00:13:03.687        "config": [
00:13:03.687          {
00:13:03.687            "params": {
00:13:03.687              "trtype": "pcie",
00:13:03.687              "traddr": "0000:00:10.0",
00:13:03.687              "name": "Nvme0"
00:13:03.687            },
00:13:03.687            "method": "bdev_nvme_attach_controller"
00:13:03.687          },
00:13:03.687          {
00:13:03.687            "method": "bdev_wait_for_examine"
00:13:03.687          }
00:13:03.687        ]
00:13:03.687      }
00:13:03.687    ]
00:13:03.687  }
00:13:03.687  [2024-11-20 05:03:17.604250] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:03.687  [2024-11-20 05:03:17.604525] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid130970 ]
00:13:03.947  [2024-11-20 05:03:17.756085] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:03.947  [2024-11-20 05:03:17.778185] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:03.947  [2024-11-20 05:03:17.815050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:04.205  
[2024-11-20T05:03:18.421Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:13:04.464  
00:13:04.464   05:03:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:04.464  
00:13:04.464  real	0m18.840s
00:13:04.464  user	0m12.030s
00:13:04.464  sys	0m4.907s
00:13:04.464   05:03:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:04.464   05:03:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:13:04.464  ************************************
00:13:04.464  END TEST spdk_dd_basic_rw
00:13:04.464  ************************************
00:13:04.464   05:03:18 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:13:04.464   05:03:18 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:04.464   05:03:18 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:04.464   05:03:18 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:04.464  ************************************
00:13:04.464  START TEST spdk_dd_posix
00:13:04.464  ************************************
00:13:04.464   05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh
00:13:04.464  * Looking for test storage...
00:13:04.464  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:13:04.464     05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:04.464      05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:04.464      05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-:
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-:
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<'
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:04.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.724  		--rc genhtml_branch_coverage=1
00:13:04.724  		--rc genhtml_function_coverage=1
00:13:04.724  		--rc genhtml_legend=1
00:13:04.724  		--rc geninfo_all_blocks=1
00:13:04.724  		--rc geninfo_unexecuted_blocks=1
00:13:04.724  		
00:13:04.724  		'
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:04.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.724  		--rc genhtml_branch_coverage=1
00:13:04.724  		--rc genhtml_function_coverage=1
00:13:04.724  		--rc genhtml_legend=1
00:13:04.724  		--rc geninfo_all_blocks=1
00:13:04.724  		--rc geninfo_unexecuted_blocks=1
00:13:04.724  		
00:13:04.724  		'
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:04.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.724  		--rc genhtml_branch_coverage=1
00:13:04.724  		--rc genhtml_function_coverage=1
00:13:04.724  		--rc genhtml_legend=1
00:13:04.724  		--rc geninfo_all_blocks=1
00:13:04.724  		--rc geninfo_unexecuted_blocks=1
00:13:04.724  		
00:13:04.724  		'
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:04.724  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:04.724  		--rc genhtml_branch_coverage=1
00:13:04.724  		--rc genhtml_function_coverage=1
00:13:04.724  		--rc genhtml_legend=1
00:13:04.724  		--rc geninfo_all_blocks=1
00:13:04.724  		--rc geninfo_unexecuted_blocks=1
00:13:04.724  		
00:13:04.724  		'
00:13:04.724    05:03:18 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:04.724     05:03:18 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH
00:13:04.724      05:03:18 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:04.724   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO'
00:13:04.724   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use'
00:13:04.724   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO'
00:13:04.724   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT
00:13:04.724   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:04.724   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:04.724   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', using AIO'
00:13:04.725  * First test run, using AIO
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:04.725  ************************************
00:13:04.725  START TEST dd_flag_append
00:13:04.725  ************************************
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1
00:13:04.725    05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32
00:13:04.725    05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable
00:13:04.725    05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=a81s7lkchqfczysqnzs9ewml2riuop6e
00:13:04.725    05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32
00:13:04.725    05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable
00:13:04.725    05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=jb5qbd4l70y82acxd4gjjg9jnaiq8btu
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s a81s7lkchqfczysqnzs9ewml2riuop6e
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s jb5qbd4l70y82acxd4gjjg9jnaiq8btu
00:13:04.725   05:03:18 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:13:04.725  [2024-11-20 05:03:18.542940] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:04.725  [2024-11-20 05:03:18.543849] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131053 ]
00:13:04.984  [2024-11-20 05:03:18.696084] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:04.984  [2024-11-20 05:03:18.715106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:04.984  [2024-11-20 05:03:18.749681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:04.984  
[2024-11-20T05:03:19.200Z] Copying: 32/32 [B] (average 31 kBps)
00:13:05.243  
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ jb5qbd4l70y82acxd4gjjg9jnaiq8btua81s7lkchqfczysqnzs9ewml2riuop6e == \j\b\5\q\b\d\4\l\7\0\y\8\2\a\c\x\d\4\g\j\j\g\9\j\n\a\i\q\8\b\t\u\a\8\1\s\7\l\k\c\h\q\f\c\z\y\s\q\n\z\s\9\e\w\m\l\2\r\i\u\o\p\6\e ]]
00:13:05.243  
00:13:05.243  real	0m0.628s
00:13:05.243  user	0m0.290s
00:13:05.243  sys	0m0.189s
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x
00:13:05.243  ************************************
00:13:05.243  END TEST dd_flag_append
00:13:05.243  ************************************
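The `dd_flag_append` test above writes `dump0` onto a pre-populated `dump1` with `--oflag=append` and then checks that the output equals the two random strings concatenated. A minimal stand-in sketch using coreutils `dd` (not `spdk_dd`; the temp files and contents here are illustrative):

```shell
# Stand-in for the append test: coreutils dd with oflag=append plus
# conv=notrunc appends the input to the existing output file instead
# of truncating it, mirroring spdk_dd's --oflag=append behavior.
tmp0=$(mktemp)
tmp1=$(mktemp)
printf %s 'aaaa' > "$tmp1"   # pre-existing contents of the output file
printf %s 'bbbb' > "$tmp0"   # bytes to append
dd if="$tmp0" of="$tmp1" oflag=append conv=notrunc status=none
result=$(cat "$tmp1")        # old contents followed by the new bytes
rm -f "$tmp0" "$tmp1"
```

Note `conv=notrunc` is required alongside `oflag=append` with coreutils `dd`, otherwise the output file is truncated before the append takes effect.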
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:05.243  ************************************
00:13:05.243  START TEST dd_flag_directory
00:13:05.243  ************************************
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:05.243    05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:05.243    05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:05.243   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:05.501  [2024-11-20 05:03:19.209972] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:05.501  [2024-11-20 05:03:19.210300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131080 ]
00:13:05.501  [2024-11-20 05:03:19.360726] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:05.501  [2024-11-20 05:03:19.383052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:05.501  [2024-11-20 05:03:19.424814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:05.760  [2024-11-20 05:03:19.513256] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:05.760  [2024-11-20 05:03:19.513634] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:05.760  [2024-11-20 05:03:19.513728] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:05.760  [2024-11-20 05:03:19.628815] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.019    05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.019    05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:06.019   05:03:19 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:13:06.019  [2024-11-20 05:03:19.785021] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:06.019  [2024-11-20 05:03:19.785277] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131101 ]
00:13:06.019  [2024-11-20 05:03:19.935486] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:06.019  [2024-11-20 05:03:19.961609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:06.278  [2024-11-20 05:03:19.999511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:06.278  [2024-11-20 05:03:20.088152] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:06.278  [2024-11-20 05:03:20.088507] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:06.278  [2024-11-20 05:03:20.088591] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:06.278  [2024-11-20 05:03:20.204254] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:06.537  
00:13:06.537  real	0m1.153s
00:13:06.537  user	0m0.566s
00:13:06.537  sys	0m0.385s
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x
00:13:06.537  ************************************
00:13:06.537  END TEST dd_flag_directory
00:13:06.537  ************************************
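The `dd_flag_directory` test expects failure (hence the `NOT` wrapper and the exit-status remapping from 236 down to 1): the `directory` flag makes the open use `O_DIRECTORY`, so opening a regular file is rejected with ENOTDIR, the "Not a directory" error in the log. A hedged sketch of the same check with coreutils `dd` standing in for `spdk_dd`:

```shell
# iflag=directory maps to open(2) with O_DIRECTORY; a regular file is
# rejected with ENOTDIR, so dd exits nonzero, which is what the test
# harness's NOT wrapper asserts.
tmp=$(mktemp)
if dd if="$tmp" iflag=directory of=/dev/null status=none 2>/dev/null; then
  dir_open=ok       # would indicate the flag was not enforced
else
  dir_open=enotdir  # expected: open fails on a non-directory
fi
rm -f "$tmp"
```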
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:06.537  ************************************
00:13:06.537  START TEST dd_flag_nofollow
00:13:06.537  ************************************
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.537    05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.537    05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:06.537   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:06.537  [2024-11-20 05:03:20.422689] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:06.537  [2024-11-20 05:03:20.423042] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131127 ]
00:13:06.795  [2024-11-20 05:03:20.574545] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:06.795  [2024-11-20 05:03:20.599528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:06.795  [2024-11-20 05:03:20.630587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:06.795  [2024-11-20 05:03:20.709216] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:13:06.795  [2024-11-20 05:03:20.709565] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:13:06.795  [2024-11-20 05:03:20.709650] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:07.055  [2024-11-20 05:03:20.823869] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.055    05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.055    05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:07.055   05:03:20 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:13:07.055  [2024-11-20 05:03:20.980585] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:07.055  [2024-11-20 05:03:20.980905] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131148 ]
00:13:07.314  [2024-11-20 05:03:21.131443] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:07.314  [2024-11-20 05:03:21.156786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:07.314  [2024-11-20 05:03:21.196312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:07.573  [2024-11-20 05:03:21.287162] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:13:07.573  [2024-11-20 05:03:21.287620] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:13:07.573  [2024-11-20 05:03:21.287802] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:07.573  [2024-11-20 05:03:21.402571] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x
00:13:07.573   05:03:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:07.832  [2024-11-20 05:03:21.563680] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:07.832  [2024-11-20 05:03:21.563986] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131154 ]
00:13:07.832  [2024-11-20 05:03:21.712763] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:07.832  [2024-11-20 05:03:21.738658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:07.832  [2024-11-20 05:03:21.774786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:08.091  
[2024-11-20T05:03:22.307Z] Copying: 512/512 [B] (average 500 kBps)
00:13:08.350  
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ w37wsphujjce9g9x2md007x67tzv9mkkv1f3zmxh9w3n4nscllb0svq7mcd3pfe5a6eqg1ox0eyjos9ejubzwfwlj2xy642d9ybbs5nwrx5a5b31iw7dblodhd1je84yjr4uqpmr1afl4ztrvopq7o4fb5cle4zc1ysx7qtvkn4ugmxi5z16ooyys3rq3n3vq38i4tim5tmrpm98cbh1iumo00u6xe4deipeanl4oim0q5dqtgvq84be0imnzgqnsxdf71hu8nd0s5fagzrawzmh6417aj1yrty2me071if7t6mlcrcspl6lv08a3faiqczsz9ims98xngrgcey5ahv89qtwnhaxc2343aupkd7gfjtbmdtwwewm2k17gad0xwa4fwcdvcuxz3n4v69ekssvm3auoqgnp84c2drt73h8fyvjpt5ib156hupitk9etxwi152l3cho7sm4yybztu532he0xqll57nd6qg40v00xwba74mza7bdkcmxl4b9 == \w\3\7\w\s\p\h\u\j\j\c\e\9\g\9\x\2\m\d\0\0\7\x\6\7\t\z\v\9\m\k\k\v\1\f\3\z\m\x\h\9\w\3\n\4\n\s\c\l\l\b\0\s\v\q\7\m\c\d\3\p\f\e\5\a\6\e\q\g\1\o\x\0\e\y\j\o\s\9\e\j\u\b\z\w\f\w\l\j\2\x\y\6\4\2\d\9\y\b\b\s\5\n\w\r\x\5\a\5\b\3\1\i\w\7\d\b\l\o\d\h\d\1\j\e\8\4\y\j\r\4\u\q\p\m\r\1\a\f\l\4\z\t\r\v\o\p\q\7\o\4\f\b\5\c\l\e\4\z\c\1\y\s\x\7\q\t\v\k\n\4\u\g\m\x\i\5\z\1\6\o\o\y\y\s\3\r\q\3\n\3\v\q\3\8\i\4\t\i\m\5\t\m\r\p\m\9\8\c\b\h\1\i\u\m\o\0\0\u\6\x\e\4\d\e\i\p\e\a\n\l\4\o\i\m\0\q\5\d\q\t\g\v\q\8\4\b\e\0\i\m\n\z\g\q\n\s\x\d\f\7\1\h\u\8\n\d\0\s\5\f\a\g\z\r\a\w\z\m\h\6\4\1\7\a\j\1\y\r\t\y\2\m\e\0\7\1\i\f\7\t\6\m\l\c\r\c\s\p\l\6\l\v\0\8\a\3\f\a\i\q\c\z\s\z\9\i\m\s\9\8\x\n\g\r\g\c\e\y\5\a\h\v\8\9\q\t\w\n\h\a\x\c\2\3\4\3\a\u\p\k\d\7\g\f\j\t\b\m\d\t\w\w\e\w\m\2\k\1\7\g\a\d\0\x\w\a\4\f\w\c\d\v\c\u\x\z\3\n\4\v\6\9\e\k\s\s\v\m\3\a\u\o\q\g\n\p\8\4\c\2\d\r\t\7\3\h\8\f\y\v\j\p\t\5\i\b\1\5\6\h\u\p\i\t\k\9\e\t\x\w\i\1\5\2\l\3\c\h\o\7\s\m\4\y\y\b\z\t\u\5\3\2\h\e\0\x\q\l\l\5\7\n\d\6\q\g\4\0\v\0\0\x\w\b\a\7\4\m\z\a\7\b\d\k\c\m\x\l\4\b\9 ]]
00:13:08.350  
00:13:08.350  real	0m1.761s
00:13:08.350  user	0m0.844s
00:13:08.350  sys	0m0.574s
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:08.350  ************************************
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x
00:13:08.350  END TEST dd_flag_nofollow
00:13:08.350  ************************************
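In `dd_flag_nofollow`, the harness creates `dump0.link`/`dump1.link` symlinks and expects opens through them to fail when `nofollow` is set: the flag maps to `O_NOFOLLOW`, producing the ELOOP ("Too many levels of symbolic links") errors logged above, while the final plain copy through the link succeeds. A stand-in sketch with coreutils `dd` (paths here are illustrative):

```shell
# iflag=nofollow maps to open(2) with O_NOFOLLOW: opening the symlink
# itself fails with ELOOP, so dd exits nonzero, as the test expects.
tmp=$(mktemp)
ln -sf "$tmp" "$tmp.link"
if dd if="$tmp.link" iflag=nofollow of=/dev/null status=none 2>/dev/null; then
  nofollow_open=ok     # would indicate the symlink was followed anyway
else
  nofollow_open=eloop  # expected: O_NOFOLLOW rejects the symlink
fi
rm -f "$tmp" "$tmp.link"
```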
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:08.350  ************************************
00:13:08.350  START TEST dd_flag_noatime
00:13:08.350  ************************************
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable
00:13:08.350   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x
00:13:08.351    05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:08.351   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1732079001
00:13:08.351    05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:08.351   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1732079002
00:13:08.351   05:03:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1
00:13:09.288   05:03:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:09.547  [2024-11-20 05:03:23.263440] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:09.547  [2024-11-20 05:03:23.263754] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131208 ]
00:13:09.547  [2024-11-20 05:03:23.414874] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:09.547  [2024-11-20 05:03:23.438549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:09.547  [2024-11-20 05:03:23.476843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:09.813  
[2024-11-20T05:03:24.030Z] Copying: 512/512 [B] (average 500 kBps)
00:13:10.073  
00:13:10.073    05:03:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:10.073   05:03:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1732079001 ))
00:13:10.073    05:03:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:10.073   05:03:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1732079002 ))
00:13:10.073   05:03:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:10.073  [2024-11-20 05:03:23.889810] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:10.073  [2024-11-20 05:03:23.890105] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131226 ]
00:13:10.332  [2024-11-20 05:03:24.040132] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:10.332  [2024-11-20 05:03:24.065635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.332  [2024-11-20 05:03:24.104270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:10.332  
[2024-11-20T05:03:24.548Z] Copying: 512/512 [B] (average 500 kBps)
00:13:10.591  
00:13:10.591    05:03:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:10.591   05:03:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1732079004 ))
00:13:10.591  
00:13:10.591  real	0m2.274s
00:13:10.591  user	0m0.561s
00:13:10.591  sys	0m0.440s
00:13:10.591   05:03:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:10.591  ************************************
00:13:10.591   05:03:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x
00:13:10.591  END TEST dd_flag_noatime
00:13:10.591  ************************************
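The `dd_flag_noatime` test captures each file's access time with `stat --printf=%X`, sleeps one second, copies with `--iflag=noatime`, and asserts the input's atime is unchanged; a plain copy afterwards is expected to advance it. A hedged sketch of the same check using coreutils `dd` as a stand-in; note `O_NOATIME` is only honored for the file's owner and `relatime` mounts can mask the difference, so this sketch only checks that the atime did not move backwards:

```shell
# iflag=noatime maps to open(2) with O_NOATIME, suppressing the access
# time update on read; the harness compares stat --printf=%X (epoch
# atime) before and after the copy.
tmp=$(mktemp)
printf %s 'payload' > "$tmp"
atime_before=$(stat --printf=%X "$tmp")
dd if="$tmp" iflag=noatime of=/dev/null status=none 2>/dev/null || true
atime_after=$(stat --printf=%X "$tmp")
rm -f "$tmp"
```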
00:13:10.591   05:03:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io
00:13:10.591   05:03:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:10.591   05:03:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:10.591   05:03:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:10.591  ************************************
00:13:10.592  START TEST dd_flags_misc
00:13:10.592  ************************************
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:10.592   05:03:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:13:10.850  [2024-11-20 05:03:24.552598] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:10.850  [2024-11-20 05:03:24.552926] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131251 ]
00:13:10.850  [2024-11-20 05:03:24.687337] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:10.850  [2024-11-20 05:03:24.712066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:10.850  [2024-11-20 05:03:24.744432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.109  
[2024-11-20T05:03:25.326Z] Copying: 512/512 [B] (average 500 kBps)
00:13:11.369  
00:13:11.369   05:03:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2m6cbbd2jpvt640f6aoicaqe7tl7ebnjv2l55ujc75103ajdyam4r16ht928zbiynfn86e474s8up46nnzhdn23gayc3oqe37ozsudg8haczr043p9km02mbkq6w80uyl8zzexlxvpx9clksp0q7l9xkzbrgkzdi4u9iyu0ftaqdzt2f4jxzniuiu6esfdpqqcsidl3higqs1oypopwagtlacznjx4t7b56vbsgm7wigmh02hhgk8bpfz6hfu6beu92mv39gmmje15hg6n3zvmnw4mae2it1uw6njqlok85l7d6yo4j4v6dabfzhfshrwqvlxbicyoroa40jib5xdm802iz1432k9ojigiazdlsmqeo429e1t4rkdpkofs4p3c4a7fp64hc129z3hz5py7w5fvxacrprzq7uysj60lhpdagjhz48nva4qcwxbnsqkm3cy9lfw0t2t9menudqjli7d81yfyi0qhikovqv8usbiut31170hi0ykddxg1jj == \2\m\6\c\b\b\d\2\j\p\v\t\6\4\0\f\6\a\o\i\c\a\q\e\7\t\l\7\e\b\n\j\v\2\l\5\5\u\j\c\7\5\1\0\3\a\j\d\y\a\m\4\r\1\6\h\t\9\2\8\z\b\i\y\n\f\n\8\6\e\4\7\4\s\8\u\p\4\6\n\n\z\h\d\n\2\3\g\a\y\c\3\o\q\e\3\7\o\z\s\u\d\g\8\h\a\c\z\r\0\4\3\p\9\k\m\0\2\m\b\k\q\6\w\8\0\u\y\l\8\z\z\e\x\l\x\v\p\x\9\c\l\k\s\p\0\q\7\l\9\x\k\z\b\r\g\k\z\d\i\4\u\9\i\y\u\0\f\t\a\q\d\z\t\2\f\4\j\x\z\n\i\u\i\u\6\e\s\f\d\p\q\q\c\s\i\d\l\3\h\i\g\q\s\1\o\y\p\o\p\w\a\g\t\l\a\c\z\n\j\x\4\t\7\b\5\6\v\b\s\g\m\7\w\i\g\m\h\0\2\h\h\g\k\8\b\p\f\z\6\h\f\u\6\b\e\u\9\2\m\v\3\9\g\m\m\j\e\1\5\h\g\6\n\3\z\v\m\n\w\4\m\a\e\2\i\t\1\u\w\6\n\j\q\l\o\k\8\5\l\7\d\6\y\o\4\j\4\v\6\d\a\b\f\z\h\f\s\h\r\w\q\v\l\x\b\i\c\y\o\r\o\a\4\0\j\i\b\5\x\d\m\8\0\2\i\z\1\4\3\2\k\9\o\j\i\g\i\a\z\d\l\s\m\q\e\o\4\2\9\e\1\t\4\r\k\d\p\k\o\f\s\4\p\3\c\4\a\7\f\p\6\4\h\c\1\2\9\z\3\h\z\5\p\y\7\w\5\f\v\x\a\c\r\p\r\z\q\7\u\y\s\j\6\0\l\h\p\d\a\g\j\h\z\4\8\n\v\a\4\q\c\w\x\b\n\s\q\k\m\3\c\y\9\l\f\w\0\t\2\t\9\m\e\n\u\d\q\j\l\i\7\d\8\1\y\f\y\i\0\q\h\i\k\o\v\q\v\8\u\s\b\i\u\t\3\1\1\7\0\h\i\0\y\k\d\d\x\g\1\j\j ]]
00:13:11.369   05:03:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:11.369   05:03:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:13:11.369  [2024-11-20 05:03:25.143110] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:11.369  [2024-11-20 05:03:25.143663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131272 ]
00:13:11.369  [2024-11-20 05:03:25.293774] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:11.369  [2024-11-20 05:03:25.318363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:11.628  [2024-11-20 05:03:25.356812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:11.628  
[2024-11-20T05:03:25.843Z] Copying: 512/512 [B] (average 500 kBps)
00:13:11.886  
00:13:11.886   05:03:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2m6cbbd2jpvt640f6aoicaqe7tl7ebnjv2l55ujc75103ajdyam4r16ht928zbiynfn86e474s8up46nnzhdn23gayc3oqe37ozsudg8haczr043p9km02mbkq6w80uyl8zzexlxvpx9clksp0q7l9xkzbrgkzdi4u9iyu0ftaqdzt2f4jxzniuiu6esfdpqqcsidl3higqs1oypopwagtlacznjx4t7b56vbsgm7wigmh02hhgk8bpfz6hfu6beu92mv39gmmje15hg6n3zvmnw4mae2it1uw6njqlok85l7d6yo4j4v6dabfzhfshrwqvlxbicyoroa40jib5xdm802iz1432k9ojigiazdlsmqeo429e1t4rkdpkofs4p3c4a7fp64hc129z3hz5py7w5fvxacrprzq7uysj60lhpdagjhz48nva4qcwxbnsqkm3cy9lfw0t2t9menudqjli7d81yfyi0qhikovqv8usbiut31170hi0ykddxg1jj == \2\m\6\c\b\b\d\2\j\p\v\t\6\4\0\f\6\a\o\i\c\a\q\e\7\t\l\7\e\b\n\j\v\2\l\5\5\u\j\c\7\5\1\0\3\a\j\d\y\a\m\4\r\1\6\h\t\9\2\8\z\b\i\y\n\f\n\8\6\e\4\7\4\s\8\u\p\4\6\n\n\z\h\d\n\2\3\g\a\y\c\3\o\q\e\3\7\o\z\s\u\d\g\8\h\a\c\z\r\0\4\3\p\9\k\m\0\2\m\b\k\q\6\w\8\0\u\y\l\8\z\z\e\x\l\x\v\p\x\9\c\l\k\s\p\0\q\7\l\9\x\k\z\b\r\g\k\z\d\i\4\u\9\i\y\u\0\f\t\a\q\d\z\t\2\f\4\j\x\z\n\i\u\i\u\6\e\s\f\d\p\q\q\c\s\i\d\l\3\h\i\g\q\s\1\o\y\p\o\p\w\a\g\t\l\a\c\z\n\j\x\4\t\7\b\5\6\v\b\s\g\m\7\w\i\g\m\h\0\2\h\h\g\k\8\b\p\f\z\6\h\f\u\6\b\e\u\9\2\m\v\3\9\g\m\m\j\e\1\5\h\g\6\n\3\z\v\m\n\w\4\m\a\e\2\i\t\1\u\w\6\n\j\q\l\o\k\8\5\l\7\d\6\y\o\4\j\4\v\6\d\a\b\f\z\h\f\s\h\r\w\q\v\l\x\b\i\c\y\o\r\o\a\4\0\j\i\b\5\x\d\m\8\0\2\i\z\1\4\3\2\k\9\o\j\i\g\i\a\z\d\l\s\m\q\e\o\4\2\9\e\1\t\4\r\k\d\p\k\o\f\s\4\p\3\c\4\a\7\f\p\6\4\h\c\1\2\9\z\3\h\z\5\p\y\7\w\5\f\v\x\a\c\r\p\r\z\q\7\u\y\s\j\6\0\l\h\p\d\a\g\j\h\z\4\8\n\v\a\4\q\c\w\x\b\n\s\q\k\m\3\c\y\9\l\f\w\0\t\2\t\9\m\e\n\u\d\q\j\l\i\7\d\8\1\y\f\y\i\0\q\h\i\k\o\v\q\v\8\u\s\b\i\u\t\3\1\1\7\0\h\i\0\y\k\d\d\x\g\1\j\j ]]
00:13:11.886   05:03:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:11.886   05:03:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:13:11.886  [2024-11-20 05:03:25.761191] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:11.886  [2024-11-20 05:03:25.761694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131284 ]
00:13:12.145  [2024-11-20 05:03:25.911680] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:12.145  [2024-11-20 05:03:25.938481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:12.145  [2024-11-20 05:03:25.975113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:12.145  
[2024-11-20T05:03:26.361Z] Copying: 512/512 [B] (average 166 kBps)
00:13:12.404  
00:13:12.404   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2m6cbbd2jpvt640f6aoicaqe7tl7ebnjv2l55ujc75103ajdyam4r16ht928zbiynfn86e474s8up46nnzhdn23gayc3oqe37ozsudg8haczr043p9km02mbkq6w80uyl8zzexlxvpx9clksp0q7l9xkzbrgkzdi4u9iyu0ftaqdzt2f4jxzniuiu6esfdpqqcsidl3higqs1oypopwagtlacznjx4t7b56vbsgm7wigmh02hhgk8bpfz6hfu6beu92mv39gmmje15hg6n3zvmnw4mae2it1uw6njqlok85l7d6yo4j4v6dabfzhfshrwqvlxbicyoroa40jib5xdm802iz1432k9ojigiazdlsmqeo429e1t4rkdpkofs4p3c4a7fp64hc129z3hz5py7w5fvxacrprzq7uysj60lhpdagjhz48nva4qcwxbnsqkm3cy9lfw0t2t9menudqjli7d81yfyi0qhikovqv8usbiut31170hi0ykddxg1jj == \2\m\6\c\b\b\d\2\j\p\v\t\6\4\0\f\6\a\o\i\c\a\q\e\7\t\l\7\e\b\n\j\v\2\l\5\5\u\j\c\7\5\1\0\3\a\j\d\y\a\m\4\r\1\6\h\t\9\2\8\z\b\i\y\n\f\n\8\6\e\4\7\4\s\8\u\p\4\6\n\n\z\h\d\n\2\3\g\a\y\c\3\o\q\e\3\7\o\z\s\u\d\g\8\h\a\c\z\r\0\4\3\p\9\k\m\0\2\m\b\k\q\6\w\8\0\u\y\l\8\z\z\e\x\l\x\v\p\x\9\c\l\k\s\p\0\q\7\l\9\x\k\z\b\r\g\k\z\d\i\4\u\9\i\y\u\0\f\t\a\q\d\z\t\2\f\4\j\x\z\n\i\u\i\u\6\e\s\f\d\p\q\q\c\s\i\d\l\3\h\i\g\q\s\1\o\y\p\o\p\w\a\g\t\l\a\c\z\n\j\x\4\t\7\b\5\6\v\b\s\g\m\7\w\i\g\m\h\0\2\h\h\g\k\8\b\p\f\z\6\h\f\u\6\b\e\u\9\2\m\v\3\9\g\m\m\j\e\1\5\h\g\6\n\3\z\v\m\n\w\4\m\a\e\2\i\t\1\u\w\6\n\j\q\l\o\k\8\5\l\7\d\6\y\o\4\j\4\v\6\d\a\b\f\z\h\f\s\h\r\w\q\v\l\x\b\i\c\y\o\r\o\a\4\0\j\i\b\5\x\d\m\8\0\2\i\z\1\4\3\2\k\9\o\j\i\g\i\a\z\d\l\s\m\q\e\o\4\2\9\e\1\t\4\r\k\d\p\k\o\f\s\4\p\3\c\4\a\7\f\p\6\4\h\c\1\2\9\z\3\h\z\5\p\y\7\w\5\f\v\x\a\c\r\p\r\z\q\7\u\y\s\j\6\0\l\h\p\d\a\g\j\h\z\4\8\n\v\a\4\q\c\w\x\b\n\s\q\k\m\3\c\y\9\l\f\w\0\t\2\t\9\m\e\n\u\d\q\j\l\i\7\d\8\1\y\f\y\i\0\q\h\i\k\o\v\q\v\8\u\s\b\i\u\t\3\1\1\7\0\h\i\0\y\k\d\d\x\g\1\j\j ]]
00:13:12.404   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:12.404   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:13:12.663  [2024-11-20 05:03:26.395585] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:12.663  [2024-11-20 05:03:26.396059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131294 ]
00:13:12.663  [2024-11-20 05:03:26.544625] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:12.663  [2024-11-20 05:03:26.563855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:12.663  [2024-11-20 05:03:26.600380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:12.922  
[2024-11-20T05:03:27.138Z] Copying: 512/512 [B] (average 250 kBps)
00:13:13.181  
00:13:13.182   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 2m6cbbd2jpvt640f6aoicaqe7tl7ebnjv2l55ujc75103ajdyam4r16ht928zbiynfn86e474s8up46nnzhdn23gayc3oqe37ozsudg8haczr043p9km02mbkq6w80uyl8zzexlxvpx9clksp0q7l9xkzbrgkzdi4u9iyu0ftaqdzt2f4jxzniuiu6esfdpqqcsidl3higqs1oypopwagtlacznjx4t7b56vbsgm7wigmh02hhgk8bpfz6hfu6beu92mv39gmmje15hg6n3zvmnw4mae2it1uw6njqlok85l7d6yo4j4v6dabfzhfshrwqvlxbicyoroa40jib5xdm802iz1432k9ojigiazdlsmqeo429e1t4rkdpkofs4p3c4a7fp64hc129z3hz5py7w5fvxacrprzq7uysj60lhpdagjhz48nva4qcwxbnsqkm3cy9lfw0t2t9menudqjli7d81yfyi0qhikovqv8usbiut31170hi0ykddxg1jj == \2\m\6\c\b\b\d\2\j\p\v\t\6\4\0\f\6\a\o\i\c\a\q\e\7\t\l\7\e\b\n\j\v\2\l\5\5\u\j\c\7\5\1\0\3\a\j\d\y\a\m\4\r\1\6\h\t\9\2\8\z\b\i\y\n\f\n\8\6\e\4\7\4\s\8\u\p\4\6\n\n\z\h\d\n\2\3\g\a\y\c\3\o\q\e\3\7\o\z\s\u\d\g\8\h\a\c\z\r\0\4\3\p\9\k\m\0\2\m\b\k\q\6\w\8\0\u\y\l\8\z\z\e\x\l\x\v\p\x\9\c\l\k\s\p\0\q\7\l\9\x\k\z\b\r\g\k\z\d\i\4\u\9\i\y\u\0\f\t\a\q\d\z\t\2\f\4\j\x\z\n\i\u\i\u\6\e\s\f\d\p\q\q\c\s\i\d\l\3\h\i\g\q\s\1\o\y\p\o\p\w\a\g\t\l\a\c\z\n\j\x\4\t\7\b\5\6\v\b\s\g\m\7\w\i\g\m\h\0\2\h\h\g\k\8\b\p\f\z\6\h\f\u\6\b\e\u\9\2\m\v\3\9\g\m\m\j\e\1\5\h\g\6\n\3\z\v\m\n\w\4\m\a\e\2\i\t\1\u\w\6\n\j\q\l\o\k\8\5\l\7\d\6\y\o\4\j\4\v\6\d\a\b\f\z\h\f\s\h\r\w\q\v\l\x\b\i\c\y\o\r\o\a\4\0\j\i\b\5\x\d\m\8\0\2\i\z\1\4\3\2\k\9\o\j\i\g\i\a\z\d\l\s\m\q\e\o\4\2\9\e\1\t\4\r\k\d\p\k\o\f\s\4\p\3\c\4\a\7\f\p\6\4\h\c\1\2\9\z\3\h\z\5\p\y\7\w\5\f\v\x\a\c\r\p\r\z\q\7\u\y\s\j\6\0\l\h\p\d\a\g\j\h\z\4\8\n\v\a\4\q\c\w\x\b\n\s\q\k\m\3\c\y\9\l\f\w\0\t\2\t\9\m\e\n\u\d\q\j\l\i\7\d\8\1\y\f\y\i\0\q\h\i\k\o\v\q\v\8\u\s\b\i\u\t\3\1\1\7\0\h\i\0\y\k\d\d\x\g\1\j\j ]]
00:13:13.182   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:13:13.182   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512
00:13:13.182   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable
00:13:13.182   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
00:13:13.182   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:13.182   05:03:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:13:13.182  [2024-11-20 05:03:27.016176] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:13.182  [2024-11-20 05:03:27.016478] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131306 ]
00:13:13.440  [2024-11-20 05:03:27.166338] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:13.440  [2024-11-20 05:03:27.190940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:13.440  [2024-11-20 05:03:27.220819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.440  
[2024-11-20T05:03:27.656Z] Copying: 512/512 [B] (average 500 kBps)
00:13:13.699  
00:13:13.699   05:03:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qxz6rxdqg9yjy30crqgqvg8kizhplispsir96gdxagcz2pxkin7zaydbzgrza6ktqxgjejzloophg6nd7hipq6r2u46vp86bh3pao4auxtv1h0knmhmtpe5orkueqhj3yiyb6p94zq8kbfp3yt860fqivq2w5nb9d134a132esqzfstri0g6kkzu6wckpkk76dxwu3u1u0wuxf5toxptuu5rdgduan3lpjqpdkkkffb69fcw4x689x2x83w4uwo50pxjszcaiw263f9jj7d5pw7feho3mfbucuj1i4vp7rscp3ngk33bt0v07zuxkex9gdjwk2nw4td42glsxc5gj6pgaf2i87cvdies42r7ovpx1xyq7gqup0it5ubvq6058j7ezfmfa480qsyiifpx8eomopwmy760mzfacq7ydiaz6d9bsx2anx5ecse567p9qvpb42danp7jq94oytpo0h6h262v8cyi7qrmmh2jxalazywjj9wlbn8zkao5259p == \q\x\z\6\r\x\d\q\g\9\y\j\y\3\0\c\r\q\g\q\v\g\8\k\i\z\h\p\l\i\s\p\s\i\r\9\6\g\d\x\a\g\c\z\2\p\x\k\i\n\7\z\a\y\d\b\z\g\r\z\a\6\k\t\q\x\g\j\e\j\z\l\o\o\p\h\g\6\n\d\7\h\i\p\q\6\r\2\u\4\6\v\p\8\6\b\h\3\p\a\o\4\a\u\x\t\v\1\h\0\k\n\m\h\m\t\p\e\5\o\r\k\u\e\q\h\j\3\y\i\y\b\6\p\9\4\z\q\8\k\b\f\p\3\y\t\8\6\0\f\q\i\v\q\2\w\5\n\b\9\d\1\3\4\a\1\3\2\e\s\q\z\f\s\t\r\i\0\g\6\k\k\z\u\6\w\c\k\p\k\k\7\6\d\x\w\u\3\u\1\u\0\w\u\x\f\5\t\o\x\p\t\u\u\5\r\d\g\d\u\a\n\3\l\p\j\q\p\d\k\k\k\f\f\b\6\9\f\c\w\4\x\6\8\9\x\2\x\8\3\w\4\u\w\o\5\0\p\x\j\s\z\c\a\i\w\2\6\3\f\9\j\j\7\d\5\p\w\7\f\e\h\o\3\m\f\b\u\c\u\j\1\i\4\v\p\7\r\s\c\p\3\n\g\k\3\3\b\t\0\v\0\7\z\u\x\k\e\x\9\g\d\j\w\k\2\n\w\4\t\d\4\2\g\l\s\x\c\5\g\j\6\p\g\a\f\2\i\8\7\c\v\d\i\e\s\4\2\r\7\o\v\p\x\1\x\y\q\7\g\q\u\p\0\i\t\5\u\b\v\q\6\0\5\8\j\7\e\z\f\m\f\a\4\8\0\q\s\y\i\i\f\p\x\8\e\o\m\o\p\w\m\y\7\6\0\m\z\f\a\c\q\7\y\d\i\a\z\6\d\9\b\s\x\2\a\n\x\5\e\c\s\e\5\6\7\p\9\q\v\p\b\4\2\d\a\n\p\7\j\q\9\4\o\y\t\p\o\0\h\6\h\2\6\2\v\8\c\y\i\7\q\r\m\m\h\2\j\x\a\l\a\z\y\w\j\j\9\w\l\b\n\8\z\k\a\o\5\2\5\9\p ]]
00:13:13.699   05:03:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:13.699   05:03:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:13:13.699  [2024-11-20 05:03:27.607497] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:13.699  [2024-11-20 05:03:27.607788] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131316 ]
00:13:13.958  [2024-11-20 05:03:27.758517] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:13.958  [2024-11-20 05:03:27.783867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:13.958  [2024-11-20 05:03:27.822088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:13.958  
[2024-11-20T05:03:28.173Z] Copying: 512/512 [B] (average 500 kBps)
00:13:14.216  
00:13:14.217   05:03:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qxz6rxdqg9yjy30crqgqvg8kizhplispsir96gdxagcz2pxkin7zaydbzgrza6ktqxgjejzloophg6nd7hipq6r2u46vp86bh3pao4auxtv1h0knmhmtpe5orkueqhj3yiyb6p94zq8kbfp3yt860fqivq2w5nb9d134a132esqzfstri0g6kkzu6wckpkk76dxwu3u1u0wuxf5toxptuu5rdgduan3lpjqpdkkkffb69fcw4x689x2x83w4uwo50pxjszcaiw263f9jj7d5pw7feho3mfbucuj1i4vp7rscp3ngk33bt0v07zuxkex9gdjwk2nw4td42glsxc5gj6pgaf2i87cvdies42r7ovpx1xyq7gqup0it5ubvq6058j7ezfmfa480qsyiifpx8eomopwmy760mzfacq7ydiaz6d9bsx2anx5ecse567p9qvpb42danp7jq94oytpo0h6h262v8cyi7qrmmh2jxalazywjj9wlbn8zkao5259p == \q\x\z\6\r\x\d\q\g\9\y\j\y\3\0\c\r\q\g\q\v\g\8\k\i\z\h\p\l\i\s\p\s\i\r\9\6\g\d\x\a\g\c\z\2\p\x\k\i\n\7\z\a\y\d\b\z\g\r\z\a\6\k\t\q\x\g\j\e\j\z\l\o\o\p\h\g\6\n\d\7\h\i\p\q\6\r\2\u\4\6\v\p\8\6\b\h\3\p\a\o\4\a\u\x\t\v\1\h\0\k\n\m\h\m\t\p\e\5\o\r\k\u\e\q\h\j\3\y\i\y\b\6\p\9\4\z\q\8\k\b\f\p\3\y\t\8\6\0\f\q\i\v\q\2\w\5\n\b\9\d\1\3\4\a\1\3\2\e\s\q\z\f\s\t\r\i\0\g\6\k\k\z\u\6\w\c\k\p\k\k\7\6\d\x\w\u\3\u\1\u\0\w\u\x\f\5\t\o\x\p\t\u\u\5\r\d\g\d\u\a\n\3\l\p\j\q\p\d\k\k\k\f\f\b\6\9\f\c\w\4\x\6\8\9\x\2\x\8\3\w\4\u\w\o\5\0\p\x\j\s\z\c\a\i\w\2\6\3\f\9\j\j\7\d\5\p\w\7\f\e\h\o\3\m\f\b\u\c\u\j\1\i\4\v\p\7\r\s\c\p\3\n\g\k\3\3\b\t\0\v\0\7\z\u\x\k\e\x\9\g\d\j\w\k\2\n\w\4\t\d\4\2\g\l\s\x\c\5\g\j\6\p\g\a\f\2\i\8\7\c\v\d\i\e\s\4\2\r\7\o\v\p\x\1\x\y\q\7\g\q\u\p\0\i\t\5\u\b\v\q\6\0\5\8\j\7\e\z\f\m\f\a\4\8\0\q\s\y\i\i\f\p\x\8\e\o\m\o\p\w\m\y\7\6\0\m\z\f\a\c\q\7\y\d\i\a\z\6\d\9\b\s\x\2\a\n\x\5\e\c\s\e\5\6\7\p\9\q\v\p\b\4\2\d\a\n\p\7\j\q\9\4\o\y\t\p\o\0\h\6\h\2\6\2\v\8\c\y\i\7\q\r\m\m\h\2\j\x\a\l\a\z\y\w\j\j\9\w\l\b\n\8\z\k\a\o\5\2\5\9\p ]]
00:13:14.217   05:03:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:14.217   05:03:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:13:14.475  [2024-11-20 05:03:28.223979] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:14.475  [2024-11-20 05:03:28.224246] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131328 ]
00:13:14.475  [2024-11-20 05:03:28.376538] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:14.475  [2024-11-20 05:03:28.405231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:14.735  [2024-11-20 05:03:28.440582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:14.735  
[2024-11-20T05:03:28.950Z] Copying: 512/512 [B] (average 250 kBps)
00:13:14.993  
00:13:14.993   05:03:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qxz6rxdqg9yjy30crqgqvg8kizhplispsir96gdxagcz2pxkin7zaydbzgrza6ktqxgjejzloophg6nd7hipq6r2u46vp86bh3pao4auxtv1h0knmhmtpe5orkueqhj3yiyb6p94zq8kbfp3yt860fqivq2w5nb9d134a132esqzfstri0g6kkzu6wckpkk76dxwu3u1u0wuxf5toxptuu5rdgduan3lpjqpdkkkffb69fcw4x689x2x83w4uwo50pxjszcaiw263f9jj7d5pw7feho3mfbucuj1i4vp7rscp3ngk33bt0v07zuxkex9gdjwk2nw4td42glsxc5gj6pgaf2i87cvdies42r7ovpx1xyq7gqup0it5ubvq6058j7ezfmfa480qsyiifpx8eomopwmy760mzfacq7ydiaz6d9bsx2anx5ecse567p9qvpb42danp7jq94oytpo0h6h262v8cyi7qrmmh2jxalazywjj9wlbn8zkao5259p == \q\x\z\6\r\x\d\q\g\9\y\j\y\3\0\c\r\q\g\q\v\g\8\k\i\z\h\p\l\i\s\p\s\i\r\9\6\g\d\x\a\g\c\z\2\p\x\k\i\n\7\z\a\y\d\b\z\g\r\z\a\6\k\t\q\x\g\j\e\j\z\l\o\o\p\h\g\6\n\d\7\h\i\p\q\6\r\2\u\4\6\v\p\8\6\b\h\3\p\a\o\4\a\u\x\t\v\1\h\0\k\n\m\h\m\t\p\e\5\o\r\k\u\e\q\h\j\3\y\i\y\b\6\p\9\4\z\q\8\k\b\f\p\3\y\t\8\6\0\f\q\i\v\q\2\w\5\n\b\9\d\1\3\4\a\1\3\2\e\s\q\z\f\s\t\r\i\0\g\6\k\k\z\u\6\w\c\k\p\k\k\7\6\d\x\w\u\3\u\1\u\0\w\u\x\f\5\t\o\x\p\t\u\u\5\r\d\g\d\u\a\n\3\l\p\j\q\p\d\k\k\k\f\f\b\6\9\f\c\w\4\x\6\8\9\x\2\x\8\3\w\4\u\w\o\5\0\p\x\j\s\z\c\a\i\w\2\6\3\f\9\j\j\7\d\5\p\w\7\f\e\h\o\3\m\f\b\u\c\u\j\1\i\4\v\p\7\r\s\c\p\3\n\g\k\3\3\b\t\0\v\0\7\z\u\x\k\e\x\9\g\d\j\w\k\2\n\w\4\t\d\4\2\g\l\s\x\c\5\g\j\6\p\g\a\f\2\i\8\7\c\v\d\i\e\s\4\2\r\7\o\v\p\x\1\x\y\q\7\g\q\u\p\0\i\t\5\u\b\v\q\6\0\5\8\j\7\e\z\f\m\f\a\4\8\0\q\s\y\i\i\f\p\x\8\e\o\m\o\p\w\m\y\7\6\0\m\z\f\a\c\q\7\y\d\i\a\z\6\d\9\b\s\x\2\a\n\x\5\e\c\s\e\5\6\7\p\9\q\v\p\b\4\2\d\a\n\p\7\j\q\9\4\o\y\t\p\o\0\h\6\h\2\6\2\v\8\c\y\i\7\q\r\m\m\h\2\j\x\a\l\a\z\y\w\j\j\9\w\l\b\n\8\z\k\a\o\5\2\5\9\p ]]
00:13:14.993   05:03:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:14.993   05:03:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:13:14.993  [2024-11-20 05:03:28.858965] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:14.993  [2024-11-20 05:03:28.859238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131345 ]
00:13:15.251  [2024-11-20 05:03:29.009813] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:15.252  [2024-11-20 05:03:29.035454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:15.252  [2024-11-20 05:03:29.080605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:15.252  
[2024-11-20T05:03:29.466Z] Copying: 512/512 [B] (average 166 kBps)
00:13:15.509  
00:13:15.509  ************************************
00:13:15.509  END TEST dd_flags_misc
00:13:15.509  ************************************
00:13:15.510   05:03:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ qxz6rxdqg9yjy30crqgqvg8kizhplispsir96gdxagcz2pxkin7zaydbzgrza6ktqxgjejzloophg6nd7hipq6r2u46vp86bh3pao4auxtv1h0knmhmtpe5orkueqhj3yiyb6p94zq8kbfp3yt860fqivq2w5nb9d134a132esqzfstri0g6kkzu6wckpkk76dxwu3u1u0wuxf5toxptuu5rdgduan3lpjqpdkkkffb69fcw4x689x2x83w4uwo50pxjszcaiw263f9jj7d5pw7feho3mfbucuj1i4vp7rscp3ngk33bt0v07zuxkex9gdjwk2nw4td42glsxc5gj6pgaf2i87cvdies42r7ovpx1xyq7gqup0it5ubvq6058j7ezfmfa480qsyiifpx8eomopwmy760mzfacq7ydiaz6d9bsx2anx5ecse567p9qvpb42danp7jq94oytpo0h6h262v8cyi7qrmmh2jxalazywjj9wlbn8zkao5259p == \q\x\z\6\r\x\d\q\g\9\y\j\y\3\0\c\r\q\g\q\v\g\8\k\i\z\h\p\l\i\s\p\s\i\r\9\6\g\d\x\a\g\c\z\2\p\x\k\i\n\7\z\a\y\d\b\z\g\r\z\a\6\k\t\q\x\g\j\e\j\z\l\o\o\p\h\g\6\n\d\7\h\i\p\q\6\r\2\u\4\6\v\p\8\6\b\h\3\p\a\o\4\a\u\x\t\v\1\h\0\k\n\m\h\m\t\p\e\5\o\r\k\u\e\q\h\j\3\y\i\y\b\6\p\9\4\z\q\8\k\b\f\p\3\y\t\8\6\0\f\q\i\v\q\2\w\5\n\b\9\d\1\3\4\a\1\3\2\e\s\q\z\f\s\t\r\i\0\g\6\k\k\z\u\6\w\c\k\p\k\k\7\6\d\x\w\u\3\u\1\u\0\w\u\x\f\5\t\o\x\p\t\u\u\5\r\d\g\d\u\a\n\3\l\p\j\q\p\d\k\k\k\f\f\b\6\9\f\c\w\4\x\6\8\9\x\2\x\8\3\w\4\u\w\o\5\0\p\x\j\s\z\c\a\i\w\2\6\3\f\9\j\j\7\d\5\p\w\7\f\e\h\o\3\m\f\b\u\c\u\j\1\i\4\v\p\7\r\s\c\p\3\n\g\k\3\3\b\t\0\v\0\7\z\u\x\k\e\x\9\g\d\j\w\k\2\n\w\4\t\d\4\2\g\l\s\x\c\5\g\j\6\p\g\a\f\2\i\8\7\c\v\d\i\e\s\4\2\r\7\o\v\p\x\1\x\y\q\7\g\q\u\p\0\i\t\5\u\b\v\q\6\0\5\8\j\7\e\z\f\m\f\a\4\8\0\q\s\y\i\i\f\p\x\8\e\o\m\o\p\w\m\y\7\6\0\m\z\f\a\c\q\7\y\d\i\a\z\6\d\9\b\s\x\2\a\n\x\5\e\c\s\e\5\6\7\p\9\q\v\p\b\4\2\d\a\n\p\7\j\q\9\4\o\y\t\p\o\0\h\6\h\2\6\2\v\8\c\y\i\7\q\r\m\m\h\2\j\x\a\l\a\z\y\w\j\j\9\w\l\b\n\8\z\k\a\o\5\2\5\9\p ]]
00:13:15.510  
00:13:15.510  real	0m4.942s
00:13:15.510  user	0m2.150s
00:13:15.510  sys	0m1.622s
00:13:15.510   05:03:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:15.510   05:03:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', using AIO'
00:13:15.768  * Second test run, using AIO
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio")
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:15.768  ************************************
00:13:15.768  START TEST dd_flag_append_forced_aio
00:13:15.768  ************************************
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1
00:13:15.768    05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32
00:13:15.768    05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:13:15.768    05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=u2tnbge9tf9nsx5xes0wh7i6q3s2bf33
00:13:15.768    05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32
00:13:15.768    05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:13:15.768    05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=tlth5yjf5ex28riidtvetcswob0epzu3
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s u2tnbge9tf9nsx5xes0wh7i6q3s2bf33
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s tlth5yjf5ex28riidtvetcswob0epzu3
00:13:15.768   05:03:29 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append
00:13:15.768  [2024-11-20 05:03:29.541848] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:15.769  [2024-11-20 05:03:29.542046] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131378 ]
00:13:15.769  [2024-11-20 05:03:29.675560] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:15.769  [2024-11-20 05:03:29.700713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:16.027  [2024-11-20 05:03:29.739302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:16.027  
[2024-11-20T05:03:30.243Z] Copying: 32/32 [B] (average 31 kBps)
00:13:16.286  
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ tlth5yjf5ex28riidtvetcswob0epzu3u2tnbge9tf9nsx5xes0wh7i6q3s2bf33 == \t\l\t\h\5\y\j\f\5\e\x\2\8\r\i\i\d\t\v\e\t\c\s\w\o\b\0\e\p\z\u\3\u\2\t\n\b\g\e\9\t\f\9\n\s\x\5\x\e\s\0\w\h\7\i\6\q\3\s\2\b\f\3\3 ]]
00:13:16.286  
00:13:16.286  real	0m0.592s
00:13:16.286  user	0m0.256s
00:13:16.286  sys	0m0.189s
00:13:16.286  ************************************
00:13:16.286  END TEST dd_flag_append_forced_aio
00:13:16.286  ************************************
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:16.286  ************************************
00:13:16.286  START TEST dd_flag_directory_forced_aio
00:13:16.286  ************************************
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:16.286    05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:16.286    05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:16.286   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:16.286  [2024-11-20 05:03:30.204028] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:16.286  [2024-11-20 05:03:30.204329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131412 ]
00:13:16.544  [2024-11-20 05:03:30.354368] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:16.544  [2024-11-20 05:03:30.378327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:16.544  [2024-11-20 05:03:30.423492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:16.803  [2024-11-20 05:03:30.515325] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:16.803  [2024-11-20 05:03:30.515521] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:16.803  [2024-11-20 05:03:30.515561] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:16.803  [2024-11-20 05:03:30.630648] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:16.803    05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:16.803    05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:16.803   05:03:30 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory
00:13:17.062  [2024-11-20 05:03:30.786667] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:17.062  [2024-11-20 05:03:30.786975] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131432 ]
00:13:17.062  [2024-11-20 05:03:30.936403] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:17.062  [2024-11-20 05:03:30.961721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:17.062  [2024-11-20 05:03:30.999707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:17.321  [2024-11-20 05:03:31.088187] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:17.321  [2024-11-20 05:03:31.088288] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory
00:13:17.321  [2024-11-20 05:03:31.088335] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:17.321  [2024-11-20 05:03:31.202504] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:17.580  
00:13:17.580  real	0m1.160s
00:13:17.580  user	0m0.543s
00:13:17.580  sys	0m0.418s
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:17.580  ************************************
00:13:17.580  END TEST dd_flag_directory_forced_aio
00:13:17.580  ************************************
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:17.580  ************************************
00:13:17.580  START TEST dd_flag_nofollow_forced_aio
00:13:17.580  ************************************
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:17.580    05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:17.580    05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:17.580   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:17.580  [2024-11-20 05:03:31.423477] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:17.580  [2024-11-20 05:03:31.423743] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131459 ]
00:13:17.839  [2024-11-20 05:03:31.574015] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:17.839  [2024-11-20 05:03:31.596314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:17.839  [2024-11-20 05:03:31.631887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:17.840  [2024-11-20 05:03:31.709742] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:13:17.840  [2024-11-20 05:03:31.709831] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links
00:13:17.840  [2024-11-20 05:03:31.709878] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:18.098  [2024-11-20 05:03:31.827995] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:18.098    05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:18.098    05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:18.098   05:03:31 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow
00:13:18.098  [2024-11-20 05:03:31.986066] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:18.098  [2024-11-20 05:03:31.986364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131480 ]
00:13:18.357  [2024-11-20 05:03:32.136535] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:18.357  [2024-11-20 05:03:32.156013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:18.357  [2024-11-20 05:03:32.194997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:18.357  [2024-11-20 05:03:32.284851] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:13:18.357  [2024-11-20 05:03:32.285224] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links
00:13:18.357  [2024-11-20 05:03:32.285310] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:18.616  [2024-11-20 05:03:32.399726] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:18.616   05:03:32 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:18.616  [2024-11-20 05:03:32.563569] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:18.616  [2024-11-20 05:03:32.563834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131483 ]
00:13:18.874  [2024-11-20 05:03:32.712734] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:18.874  [2024-11-20 05:03:32.737503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:18.874  [2024-11-20 05:03:32.768895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:19.133  
[2024-11-20T05:03:33.090Z] Copying: 512/512 [B] (average 500 kBps)
00:13:19.133  
00:13:19.391  ************************************
00:13:19.391  END TEST dd_flag_nofollow_forced_aio
00:13:19.391  ************************************
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ raeo8lqrd0wiv27tgy4qeuwi0594n28fogg0idfxnyroxxyfpxsyqtqkku4tw0lwdvvtbrsmseazw3g8jaz88lc7v7794ou6dq82zkjia8tr06i0q2hkx02zjie4cbdn1xplgbde8ap88q7kwy6jfv6527auk0b5pi22anspgbndwtpswsi3a3fa88gdx4f2n1mxnpck0p5zq11yc0tybom5ttia7wryvr0gnh5n5gbay4jifvdflh2w96vvldyz0r4xxbg678ck31n5rco7w6dw7919d4ggav4atj7hl9jhosvhmptqcb97qdshq7544psr3anzmjxjy096ku1rfjtdkpkzidjt6n7j9sstk3rcskecmxwa4jp2wei8140xd0bsfipp7n4seabbe4gi0e114abawrnkdcsh48iepocbap93qchtoa4qbaxvdstzn4skng9rp95ubq3z37epttw99bqhljv3weei3y4xigrjywvtrl4qjt121bapxhzd == \r\a\e\o\8\l\q\r\d\0\w\i\v\2\7\t\g\y\4\q\e\u\w\i\0\5\9\4\n\2\8\f\o\g\g\0\i\d\f\x\n\y\r\o\x\x\y\f\p\x\s\y\q\t\q\k\k\u\4\t\w\0\l\w\d\v\v\t\b\r\s\m\s\e\a\z\w\3\g\8\j\a\z\8\8\l\c\7\v\7\7\9\4\o\u\6\d\q\8\2\z\k\j\i\a\8\t\r\0\6\i\0\q\2\h\k\x\0\2\z\j\i\e\4\c\b\d\n\1\x\p\l\g\b\d\e\8\a\p\8\8\q\7\k\w\y\6\j\f\v\6\5\2\7\a\u\k\0\b\5\p\i\2\2\a\n\s\p\g\b\n\d\w\t\p\s\w\s\i\3\a\3\f\a\8\8\g\d\x\4\f\2\n\1\m\x\n\p\c\k\0\p\5\z\q\1\1\y\c\0\t\y\b\o\m\5\t\t\i\a\7\w\r\y\v\r\0\g\n\h\5\n\5\g\b\a\y\4\j\i\f\v\d\f\l\h\2\w\9\6\v\v\l\d\y\z\0\r\4\x\x\b\g\6\7\8\c\k\3\1\n\5\r\c\o\7\w\6\d\w\7\9\1\9\d\4\g\g\a\v\4\a\t\j\7\h\l\9\j\h\o\s\v\h\m\p\t\q\c\b\9\7\q\d\s\h\q\7\5\4\4\p\s\r\3\a\n\z\m\j\x\j\y\0\9\6\k\u\1\r\f\j\t\d\k\p\k\z\i\d\j\t\6\n\7\j\9\s\s\t\k\3\r\c\s\k\e\c\m\x\w\a\4\j\p\2\w\e\i\8\1\4\0\x\d\0\b\s\f\i\p\p\7\n\4\s\e\a\b\b\e\4\g\i\0\e\1\1\4\a\b\a\w\r\n\k\d\c\s\h\4\8\i\e\p\o\c\b\a\p\9\3\q\c\h\t\o\a\4\q\b\a\x\v\d\s\t\z\n\4\s\k\n\g\9\r\p\9\5\u\b\q\3\z\3\7\e\p\t\t\w\9\9\b\q\h\l\j\v\3\w\e\e\i\3\y\4\x\i\g\r\j\y\w\v\t\r\l\4\q\j\t\1\2\1\b\a\p\x\h\z\d ]]
00:13:19.391  
00:13:19.391  real	0m1.740s
00:13:19.391  user	0m0.822s
00:13:19.391  sys	0m0.587s
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:19.391  ************************************
00:13:19.391  START TEST dd_flag_noatime_forced_aio
00:13:19.391  ************************************
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:19.391    05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1732079012
00:13:19.391    05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1732079013
00:13:19.391   05:03:33 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1
00:13:20.328   05:03:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:20.328  [2024-11-20 05:03:34.242748] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:20.328  [2024-11-20 05:03:34.243086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131540 ]
00:13:20.587  [2024-11-20 05:03:34.394967] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:20.587  [2024-11-20 05:03:34.417735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:20.587  [2024-11-20 05:03:34.456140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:20.845  
[2024-11-20T05:03:34.802Z] Copying: 512/512 [B] (average 500 kBps)
00:13:20.845  
00:13:20.845    05:03:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:20.846   05:03:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1732079012 ))
00:13:20.846    05:03:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:21.104   05:03:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1732079013 ))
00:13:21.104   05:03:34 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:21.104  [2024-11-20 05:03:34.859750] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:21.104  [2024-11-20 05:03:34.860018] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131547 ]
00:13:21.104  [2024-11-20 05:03:35.007939] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:21.104  [2024-11-20 05:03:35.033697] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:21.362  [2024-11-20 05:03:35.068864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:21.362  
[2024-11-20T05:03:35.579Z] Copying: 512/512 [B] (average 500 kBps)
00:13:21.622  
00:13:21.622    05:03:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1732079015 ))
00:13:21.622  
00:13:21.622  real	0m2.255s
00:13:21.622  user	0m0.553s
00:13:21.622  sys	0m0.431s
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:21.622  ************************************
00:13:21.622  END TEST dd_flag_noatime_forced_aio
00:13:21.622  ************************************
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:21.622  ************************************
00:13:21.622  START TEST dd_flags_misc_forced_aio
00:13:21.622  ************************************
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock)
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync)
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:21.622   05:03:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:13:21.622  [2024-11-20 05:03:35.529053] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:21.622  [2024-11-20 05:03:35.529924] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131583 ]
00:13:21.880  [2024-11-20 05:03:35.682598] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:21.880  [2024-11-20 05:03:35.705721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:21.880  [2024-11-20 05:03:35.744883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:21.880  
[2024-11-20T05:03:36.096Z] Copying: 512/512 [B] (average 500 kBps)
00:13:22.139  
00:13:22.399   05:03:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v007axj393vgm4iyfynp949kf84a2y51m4d69xsomp691pyit2ny6k7mxijoqaxac4i6no2p1rr1a2j7zutwef8ccki5eahslbvbqd0g201yfb7662z53nhsxzi8n03map29i601xt6grzdhxk7hpcf95q9rpwzlibhakqc4h9jpmrsj9buiagkv8xjyeq9hlfkvp4ikwefi3fcfjmf0hfhp3wt81m4leh86q8cixc7d5yg3a8ka7f65h6nrlwn2kuoq81ic53oakjrxjo83wiy6569b3lphdfev7uxqd4tdk15fxfagbbu1ffwe7o43eccaawwinpdcrxa4w9ehyblk07zsetk8sj4w6jzwotjlfuifht82swhpjdf0q4xnfn7cmwk22tc2wgp1y41hadzbzaqe51fqkhy0hyxtyc4mq8bl0amhlc6wdzhfqm8i7otr5lf09vysfvkjc775iirn9sr8cu3xrlowq28i39p4h3lisb8x5n2ikhyysksh == \v\0\0\7\a\x\j\3\9\3\v\g\m\4\i\y\f\y\n\p\9\4\9\k\f\8\4\a\2\y\5\1\m\4\d\6\9\x\s\o\m\p\6\9\1\p\y\i\t\2\n\y\6\k\7\m\x\i\j\o\q\a\x\a\c\4\i\6\n\o\2\p\1\r\r\1\a\2\j\7\z\u\t\w\e\f\8\c\c\k\i\5\e\a\h\s\l\b\v\b\q\d\0\g\2\0\1\y\f\b\7\6\6\2\z\5\3\n\h\s\x\z\i\8\n\0\3\m\a\p\2\9\i\6\0\1\x\t\6\g\r\z\d\h\x\k\7\h\p\c\f\9\5\q\9\r\p\w\z\l\i\b\h\a\k\q\c\4\h\9\j\p\m\r\s\j\9\b\u\i\a\g\k\v\8\x\j\y\e\q\9\h\l\f\k\v\p\4\i\k\w\e\f\i\3\f\c\f\j\m\f\0\h\f\h\p\3\w\t\8\1\m\4\l\e\h\8\6\q\8\c\i\x\c\7\d\5\y\g\3\a\8\k\a\7\f\6\5\h\6\n\r\l\w\n\2\k\u\o\q\8\1\i\c\5\3\o\a\k\j\r\x\j\o\8\3\w\i\y\6\5\6\9\b\3\l\p\h\d\f\e\v\7\u\x\q\d\4\t\d\k\1\5\f\x\f\a\g\b\b\u\1\f\f\w\e\7\o\4\3\e\c\c\a\a\w\w\i\n\p\d\c\r\x\a\4\w\9\e\h\y\b\l\k\0\7\z\s\e\t\k\8\s\j\4\w\6\j\z\w\o\t\j\l\f\u\i\f\h\t\8\2\s\w\h\p\j\d\f\0\q\4\x\n\f\n\7\c\m\w\k\2\2\t\c\2\w\g\p\1\y\4\1\h\a\d\z\b\z\a\q\e\5\1\f\q\k\h\y\0\h\y\x\t\y\c\4\m\q\8\b\l\0\a\m\h\l\c\6\w\d\z\h\f\q\m\8\i\7\o\t\r\5\l\f\0\9\v\y\s\f\v\k\j\c\7\7\5\i\i\r\n\9\s\r\8\c\u\3\x\r\l\o\w\q\2\8\i\3\9\p\4\h\3\l\i\s\b\8\x\5\n\2\i\k\h\y\y\s\k\s\h ]]
00:13:22.399   05:03:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:22.399   05:03:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:13:22.399  [2024-11-20 05:03:36.162387] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:22.399  [2024-11-20 05:03:36.163191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131604 ]
00:13:22.399  [2024-11-20 05:03:36.315367] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:22.399  [2024-11-20 05:03:36.338526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:22.658  [2024-11-20 05:03:36.376775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:22.658  
[2024-11-20T05:03:36.874Z] Copying: 512/512 [B] (average 500 kBps)
00:13:22.917  
00:13:22.917   05:03:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v007axj393vgm4iyfynp949kf84a2y51m4d69xsomp691pyit2ny6k7mxijoqaxac4i6no2p1rr1a2j7zutwef8ccki5eahslbvbqd0g201yfb7662z53nhsxzi8n03map29i601xt6grzdhxk7hpcf95q9rpwzlibhakqc4h9jpmrsj9buiagkv8xjyeq9hlfkvp4ikwefi3fcfjmf0hfhp3wt81m4leh86q8cixc7d5yg3a8ka7f65h6nrlwn2kuoq81ic53oakjrxjo83wiy6569b3lphdfev7uxqd4tdk15fxfagbbu1ffwe7o43eccaawwinpdcrxa4w9ehyblk07zsetk8sj4w6jzwotjlfuifht82swhpjdf0q4xnfn7cmwk22tc2wgp1y41hadzbzaqe51fqkhy0hyxtyc4mq8bl0amhlc6wdzhfqm8i7otr5lf09vysfvkjc775iirn9sr8cu3xrlowq28i39p4h3lisb8x5n2ikhyysksh == \v\0\0\7\a\x\j\3\9\3\v\g\m\4\i\y\f\y\n\p\9\4\9\k\f\8\4\a\2\y\5\1\m\4\d\6\9\x\s\o\m\p\6\9\1\p\y\i\t\2\n\y\6\k\7\m\x\i\j\o\q\a\x\a\c\4\i\6\n\o\2\p\1\r\r\1\a\2\j\7\z\u\t\w\e\f\8\c\c\k\i\5\e\a\h\s\l\b\v\b\q\d\0\g\2\0\1\y\f\b\7\6\6\2\z\5\3\n\h\s\x\z\i\8\n\0\3\m\a\p\2\9\i\6\0\1\x\t\6\g\r\z\d\h\x\k\7\h\p\c\f\9\5\q\9\r\p\w\z\l\i\b\h\a\k\q\c\4\h\9\j\p\m\r\s\j\9\b\u\i\a\g\k\v\8\x\j\y\e\q\9\h\l\f\k\v\p\4\i\k\w\e\f\i\3\f\c\f\j\m\f\0\h\f\h\p\3\w\t\8\1\m\4\l\e\h\8\6\q\8\c\i\x\c\7\d\5\y\g\3\a\8\k\a\7\f\6\5\h\6\n\r\l\w\n\2\k\u\o\q\8\1\i\c\5\3\o\a\k\j\r\x\j\o\8\3\w\i\y\6\5\6\9\b\3\l\p\h\d\f\e\v\7\u\x\q\d\4\t\d\k\1\5\f\x\f\a\g\b\b\u\1\f\f\w\e\7\o\4\3\e\c\c\a\a\w\w\i\n\p\d\c\r\x\a\4\w\9\e\h\y\b\l\k\0\7\z\s\e\t\k\8\s\j\4\w\6\j\z\w\o\t\j\l\f\u\i\f\h\t\8\2\s\w\h\p\j\d\f\0\q\4\x\n\f\n\7\c\m\w\k\2\2\t\c\2\w\g\p\1\y\4\1\h\a\d\z\b\z\a\q\e\5\1\f\q\k\h\y\0\h\y\x\t\y\c\4\m\q\8\b\l\0\a\m\h\l\c\6\w\d\z\h\f\q\m\8\i\7\o\t\r\5\l\f\0\9\v\y\s\f\v\k\j\c\7\7\5\i\i\r\n\9\s\r\8\c\u\3\x\r\l\o\w\q\2\8\i\3\9\p\4\h\3\l\i\s\b\8\x\5\n\2\i\k\h\y\y\s\k\s\h ]]
00:13:22.917   05:03:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:22.917   05:03:36 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:13:22.917  [2024-11-20 05:03:36.789844] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:22.917  [2024-11-20 05:03:36.790142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131609 ]
00:13:23.176  [2024-11-20 05:03:36.940653] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:23.176  [2024-11-20 05:03:36.964227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:23.176  [2024-11-20 05:03:37.001468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:23.176  
[2024-11-20T05:03:37.392Z] Copying: 512/512 [B] (average 166 kBps)
00:13:23.435  
00:13:23.435   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v007axj393vgm4iyfynp949kf84a2y51m4d69xsomp691pyit2ny6k7mxijoqaxac4i6no2p1rr1a2j7zutwef8ccki5eahslbvbqd0g201yfb7662z53nhsxzi8n03map29i601xt6grzdhxk7hpcf95q9rpwzlibhakqc4h9jpmrsj9buiagkv8xjyeq9hlfkvp4ikwefi3fcfjmf0hfhp3wt81m4leh86q8cixc7d5yg3a8ka7f65h6nrlwn2kuoq81ic53oakjrxjo83wiy6569b3lphdfev7uxqd4tdk15fxfagbbu1ffwe7o43eccaawwinpdcrxa4w9ehyblk07zsetk8sj4w6jzwotjlfuifht82swhpjdf0q4xnfn7cmwk22tc2wgp1y41hadzbzaqe51fqkhy0hyxtyc4mq8bl0amhlc6wdzhfqm8i7otr5lf09vysfvkjc775iirn9sr8cu3xrlowq28i39p4h3lisb8x5n2ikhyysksh == \v\0\0\7\a\x\j\3\9\3\v\g\m\4\i\y\f\y\n\p\9\4\9\k\f\8\4\a\2\y\5\1\m\4\d\6\9\x\s\o\m\p\6\9\1\p\y\i\t\2\n\y\6\k\7\m\x\i\j\o\q\a\x\a\c\4\i\6\n\o\2\p\1\r\r\1\a\2\j\7\z\u\t\w\e\f\8\c\c\k\i\5\e\a\h\s\l\b\v\b\q\d\0\g\2\0\1\y\f\b\7\6\6\2\z\5\3\n\h\s\x\z\i\8\n\0\3\m\a\p\2\9\i\6\0\1\x\t\6\g\r\z\d\h\x\k\7\h\p\c\f\9\5\q\9\r\p\w\z\l\i\b\h\a\k\q\c\4\h\9\j\p\m\r\s\j\9\b\u\i\a\g\k\v\8\x\j\y\e\q\9\h\l\f\k\v\p\4\i\k\w\e\f\i\3\f\c\f\j\m\f\0\h\f\h\p\3\w\t\8\1\m\4\l\e\h\8\6\q\8\c\i\x\c\7\d\5\y\g\3\a\8\k\a\7\f\6\5\h\6\n\r\l\w\n\2\k\u\o\q\8\1\i\c\5\3\o\a\k\j\r\x\j\o\8\3\w\i\y\6\5\6\9\b\3\l\p\h\d\f\e\v\7\u\x\q\d\4\t\d\k\1\5\f\x\f\a\g\b\b\u\1\f\f\w\e\7\o\4\3\e\c\c\a\a\w\w\i\n\p\d\c\r\x\a\4\w\9\e\h\y\b\l\k\0\7\z\s\e\t\k\8\s\j\4\w\6\j\z\w\o\t\j\l\f\u\i\f\h\t\8\2\s\w\h\p\j\d\f\0\q\4\x\n\f\n\7\c\m\w\k\2\2\t\c\2\w\g\p\1\y\4\1\h\a\d\z\b\z\a\q\e\5\1\f\q\k\h\y\0\h\y\x\t\y\c\4\m\q\8\b\l\0\a\m\h\l\c\6\w\d\z\h\f\q\m\8\i\7\o\t\r\5\l\f\0\9\v\y\s\f\v\k\j\c\7\7\5\i\i\r\n\9\s\r\8\c\u\3\x\r\l\o\w\q\2\8\i\3\9\p\4\h\3\l\i\s\b\8\x\5\n\2\i\k\h\y\y\s\k\s\h ]]
00:13:23.435   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:23.435   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:13:23.693  [2024-11-20 05:03:37.410147] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:23.693  [2024-11-20 05:03:37.410453] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131626 ]
00:13:23.693  [2024-11-20 05:03:37.560282] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:23.693  [2024-11-20 05:03:37.585455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:23.693  [2024-11-20 05:03:37.622549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:23.952  
[2024-11-20T05:03:38.169Z] Copying: 512/512 [B] (average 250 kBps)
00:13:24.212  
00:13:24.212   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ v007axj393vgm4iyfynp949kf84a2y51m4d69xsomp691pyit2ny6k7mxijoqaxac4i6no2p1rr1a2j7zutwef8ccki5eahslbvbqd0g201yfb7662z53nhsxzi8n03map29i601xt6grzdhxk7hpcf95q9rpwzlibhakqc4h9jpmrsj9buiagkv8xjyeq9hlfkvp4ikwefi3fcfjmf0hfhp3wt81m4leh86q8cixc7d5yg3a8ka7f65h6nrlwn2kuoq81ic53oakjrxjo83wiy6569b3lphdfev7uxqd4tdk15fxfagbbu1ffwe7o43eccaawwinpdcrxa4w9ehyblk07zsetk8sj4w6jzwotjlfuifht82swhpjdf0q4xnfn7cmwk22tc2wgp1y41hadzbzaqe51fqkhy0hyxtyc4mq8bl0amhlc6wdzhfqm8i7otr5lf09vysfvkjc775iirn9sr8cu3xrlowq28i39p4h3lisb8x5n2ikhyysksh == \v\0\0\7\a\x\j\3\9\3\v\g\m\4\i\y\f\y\n\p\9\4\9\k\f\8\4\a\2\y\5\1\m\4\d\6\9\x\s\o\m\p\6\9\1\p\y\i\t\2\n\y\6\k\7\m\x\i\j\o\q\a\x\a\c\4\i\6\n\o\2\p\1\r\r\1\a\2\j\7\z\u\t\w\e\f\8\c\c\k\i\5\e\a\h\s\l\b\v\b\q\d\0\g\2\0\1\y\f\b\7\6\6\2\z\5\3\n\h\s\x\z\i\8\n\0\3\m\a\p\2\9\i\6\0\1\x\t\6\g\r\z\d\h\x\k\7\h\p\c\f\9\5\q\9\r\p\w\z\l\i\b\h\a\k\q\c\4\h\9\j\p\m\r\s\j\9\b\u\i\a\g\k\v\8\x\j\y\e\q\9\h\l\f\k\v\p\4\i\k\w\e\f\i\3\f\c\f\j\m\f\0\h\f\h\p\3\w\t\8\1\m\4\l\e\h\8\6\q\8\c\i\x\c\7\d\5\y\g\3\a\8\k\a\7\f\6\5\h\6\n\r\l\w\n\2\k\u\o\q\8\1\i\c\5\3\o\a\k\j\r\x\j\o\8\3\w\i\y\6\5\6\9\b\3\l\p\h\d\f\e\v\7\u\x\q\d\4\t\d\k\1\5\f\x\f\a\g\b\b\u\1\f\f\w\e\7\o\4\3\e\c\c\a\a\w\w\i\n\p\d\c\r\x\a\4\w\9\e\h\y\b\l\k\0\7\z\s\e\t\k\8\s\j\4\w\6\j\z\w\o\t\j\l\f\u\i\f\h\t\8\2\s\w\h\p\j\d\f\0\q\4\x\n\f\n\7\c\m\w\k\2\2\t\c\2\w\g\p\1\y\4\1\h\a\d\z\b\z\a\q\e\5\1\f\q\k\h\y\0\h\y\x\t\y\c\4\m\q\8\b\l\0\a\m\h\l\c\6\w\d\z\h\f\q\m\8\i\7\o\t\r\5\l\f\0\9\v\y\s\f\v\k\j\c\7\7\5\i\i\r\n\9\s\r\8\c\u\3\x\r\l\o\w\q\2\8\i\3\9\p\4\h\3\l\i\s\b\8\x\5\n\2\i\k\h\y\y\s\k\s\h ]]
00:13:24.212   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}"
00:13:24.212   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512
00:13:24.212   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable
00:13:24.212   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:24.212   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:24.212   05:03:37 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct
00:13:24.212  [2024-11-20 05:03:38.041622] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:24.212  [2024-11-20 05:03:38.041909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131638 ]
00:13:24.471  [2024-11-20 05:03:38.192406] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:24.471  [2024-11-20 05:03:38.219610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:24.471  [2024-11-20 05:03:38.260663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:24.471  
[2024-11-20T05:03:38.687Z] Copying: 512/512 [B] (average 500 kBps)
00:13:24.730  
00:13:24.730   05:03:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6jmcb1dn0g9brr61jp1grfrybl6uif739qrpiuzxcsnfsddzjst38vusnt84ks43l3enq2jgfcoaj93sqt6mbcavw01nmi87wumsd0zqcr5h2iousc1d616z32184mk8761oxuop0kweri29zpgaa26vf5d1l7qnraeym83fs44bdeg2qiss4p24k9apldbp6fr83n62bwb2yn4avpfvdg7wntmigfxq5q0kcp7rltlup7qkajz6mh3qjhsfxwnp5a4641i11etowgx66lu66c96087k8jv8ucyqnixmgixv7w5kwj347n4cbwlke2nmi6x7f909lbitshfxzsdbygi76sac00k0jb14vv0wcaerlxrctdvsp7zxxvj0egmoc0zc1gbe6o0deb7gkxrrvftuu4zua3pidrgfl1eax3brly6um8hut2timzem09j3bcx462pop9oydkgmcyotrwp60qk94398jqnp2xw1hethp8febl8w4t62xhpf5uvk == \6\j\m\c\b\1\d\n\0\g\9\b\r\r\6\1\j\p\1\g\r\f\r\y\b\l\6\u\i\f\7\3\9\q\r\p\i\u\z\x\c\s\n\f\s\d\d\z\j\s\t\3\8\v\u\s\n\t\8\4\k\s\4\3\l\3\e\n\q\2\j\g\f\c\o\a\j\9\3\s\q\t\6\m\b\c\a\v\w\0\1\n\m\i\8\7\w\u\m\s\d\0\z\q\c\r\5\h\2\i\o\u\s\c\1\d\6\1\6\z\3\2\1\8\4\m\k\8\7\6\1\o\x\u\o\p\0\k\w\e\r\i\2\9\z\p\g\a\a\2\6\v\f\5\d\1\l\7\q\n\r\a\e\y\m\8\3\f\s\4\4\b\d\e\g\2\q\i\s\s\4\p\2\4\k\9\a\p\l\d\b\p\6\f\r\8\3\n\6\2\b\w\b\2\y\n\4\a\v\p\f\v\d\g\7\w\n\t\m\i\g\f\x\q\5\q\0\k\c\p\7\r\l\t\l\u\p\7\q\k\a\j\z\6\m\h\3\q\j\h\s\f\x\w\n\p\5\a\4\6\4\1\i\1\1\e\t\o\w\g\x\6\6\l\u\6\6\c\9\6\0\8\7\k\8\j\v\8\u\c\y\q\n\i\x\m\g\i\x\v\7\w\5\k\w\j\3\4\7\n\4\c\b\w\l\k\e\2\n\m\i\6\x\7\f\9\0\9\l\b\i\t\s\h\f\x\z\s\d\b\y\g\i\7\6\s\a\c\0\0\k\0\j\b\1\4\v\v\0\w\c\a\e\r\l\x\r\c\t\d\v\s\p\7\z\x\x\v\j\0\e\g\m\o\c\0\z\c\1\g\b\e\6\o\0\d\e\b\7\g\k\x\r\r\v\f\t\u\u\4\z\u\a\3\p\i\d\r\g\f\l\1\e\a\x\3\b\r\l\y\6\u\m\8\h\u\t\2\t\i\m\z\e\m\0\9\j\3\b\c\x\4\6\2\p\o\p\9\o\y\d\k\g\m\c\y\o\t\r\w\p\6\0\q\k\9\4\3\9\8\j\q\n\p\2\x\w\1\h\e\t\h\p\8\f\e\b\l\8\w\4\t\6\2\x\h\p\f\5\u\v\k ]]
00:13:24.730   05:03:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:24.730   05:03:38 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock
00:13:24.730  [2024-11-20 05:03:38.672179] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:24.730  [2024-11-20 05:03:38.672467] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131648 ]
00:13:24.989  [2024-11-20 05:03:38.821920] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:24.989  [2024-11-20 05:03:38.846430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:24.989  [2024-11-20 05:03:38.883642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:25.247  
[2024-11-20T05:03:39.463Z] Copying: 512/512 [B] (average 500 kBps)
00:13:25.506  
00:13:25.506   05:03:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6jmcb1dn0g9brr61jp1grfrybl6uif739qrpiuzxcsnfsddzjst38vusnt84ks43l3enq2jgfcoaj93sqt6mbcavw01nmi87wumsd0zqcr5h2iousc1d616z32184mk8761oxuop0kweri29zpgaa26vf5d1l7qnraeym83fs44bdeg2qiss4p24k9apldbp6fr83n62bwb2yn4avpfvdg7wntmigfxq5q0kcp7rltlup7qkajz6mh3qjhsfxwnp5a4641i11etowgx66lu66c96087k8jv8ucyqnixmgixv7w5kwj347n4cbwlke2nmi6x7f909lbitshfxzsdbygi76sac00k0jb14vv0wcaerlxrctdvsp7zxxvj0egmoc0zc1gbe6o0deb7gkxrrvftuu4zua3pidrgfl1eax3brly6um8hut2timzem09j3bcx462pop9oydkgmcyotrwp60qk94398jqnp2xw1hethp8febl8w4t62xhpf5uvk == \6\j\m\c\b\1\d\n\0\g\9\b\r\r\6\1\j\p\1\g\r\f\r\y\b\l\6\u\i\f\7\3\9\q\r\p\i\u\z\x\c\s\n\f\s\d\d\z\j\s\t\3\8\v\u\s\n\t\8\4\k\s\4\3\l\3\e\n\q\2\j\g\f\c\o\a\j\9\3\s\q\t\6\m\b\c\a\v\w\0\1\n\m\i\8\7\w\u\m\s\d\0\z\q\c\r\5\h\2\i\o\u\s\c\1\d\6\1\6\z\3\2\1\8\4\m\k\8\7\6\1\o\x\u\o\p\0\k\w\e\r\i\2\9\z\p\g\a\a\2\6\v\f\5\d\1\l\7\q\n\r\a\e\y\m\8\3\f\s\4\4\b\d\e\g\2\q\i\s\s\4\p\2\4\k\9\a\p\l\d\b\p\6\f\r\8\3\n\6\2\b\w\b\2\y\n\4\a\v\p\f\v\d\g\7\w\n\t\m\i\g\f\x\q\5\q\0\k\c\p\7\r\l\t\l\u\p\7\q\k\a\j\z\6\m\h\3\q\j\h\s\f\x\w\n\p\5\a\4\6\4\1\i\1\1\e\t\o\w\g\x\6\6\l\u\6\6\c\9\6\0\8\7\k\8\j\v\8\u\c\y\q\n\i\x\m\g\i\x\v\7\w\5\k\w\j\3\4\7\n\4\c\b\w\l\k\e\2\n\m\i\6\x\7\f\9\0\9\l\b\i\t\s\h\f\x\z\s\d\b\y\g\i\7\6\s\a\c\0\0\k\0\j\b\1\4\v\v\0\w\c\a\e\r\l\x\r\c\t\d\v\s\p\7\z\x\x\v\j\0\e\g\m\o\c\0\z\c\1\g\b\e\6\o\0\d\e\b\7\g\k\x\r\r\v\f\t\u\u\4\z\u\a\3\p\i\d\r\g\f\l\1\e\a\x\3\b\r\l\y\6\u\m\8\h\u\t\2\t\i\m\z\e\m\0\9\j\3\b\c\x\4\6\2\p\o\p\9\o\y\d\k\g\m\c\y\o\t\r\w\p\6\0\q\k\9\4\3\9\8\j\q\n\p\2\x\w\1\h\e\t\h\p\8\f\e\b\l\8\w\4\t\6\2\x\h\p\f\5\u\v\k ]]
00:13:25.506   05:03:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:25.506   05:03:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync
00:13:25.506  [2024-11-20 05:03:39.271503] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:25.506  [2024-11-20 05:03:39.271731] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131660 ]
00:13:25.506  [2024-11-20 05:03:39.407950] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:25.506  [2024-11-20 05:03:39.431482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:25.763  [2024-11-20 05:03:39.463802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:25.763  
[2024-11-20T05:03:39.978Z] Copying: 512/512 [B] (average 71 kBps)
00:13:26.021  
00:13:26.021   05:03:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6jmcb1dn0g9brr61jp1grfrybl6uif739qrpiuzxcsnfsddzjst38vusnt84ks43l3enq2jgfcoaj93sqt6mbcavw01nmi87wumsd0zqcr5h2iousc1d616z32184mk8761oxuop0kweri29zpgaa26vf5d1l7qnraeym83fs44bdeg2qiss4p24k9apldbp6fr83n62bwb2yn4avpfvdg7wntmigfxq5q0kcp7rltlup7qkajz6mh3qjhsfxwnp5a4641i11etowgx66lu66c96087k8jv8ucyqnixmgixv7w5kwj347n4cbwlke2nmi6x7f909lbitshfxzsdbygi76sac00k0jb14vv0wcaerlxrctdvsp7zxxvj0egmoc0zc1gbe6o0deb7gkxrrvftuu4zua3pidrgfl1eax3brly6um8hut2timzem09j3bcx462pop9oydkgmcyotrwp60qk94398jqnp2xw1hethp8febl8w4t62xhpf5uvk == \6\j\m\c\b\1\d\n\0\g\9\b\r\r\6\1\j\p\1\g\r\f\r\y\b\l\6\u\i\f\7\3\9\q\r\p\i\u\z\x\c\s\n\f\s\d\d\z\j\s\t\3\8\v\u\s\n\t\8\4\k\s\4\3\l\3\e\n\q\2\j\g\f\c\o\a\j\9\3\s\q\t\6\m\b\c\a\v\w\0\1\n\m\i\8\7\w\u\m\s\d\0\z\q\c\r\5\h\2\i\o\u\s\c\1\d\6\1\6\z\3\2\1\8\4\m\k\8\7\6\1\o\x\u\o\p\0\k\w\e\r\i\2\9\z\p\g\a\a\2\6\v\f\5\d\1\l\7\q\n\r\a\e\y\m\8\3\f\s\4\4\b\d\e\g\2\q\i\s\s\4\p\2\4\k\9\a\p\l\d\b\p\6\f\r\8\3\n\6\2\b\w\b\2\y\n\4\a\v\p\f\v\d\g\7\w\n\t\m\i\g\f\x\q\5\q\0\k\c\p\7\r\l\t\l\u\p\7\q\k\a\j\z\6\m\h\3\q\j\h\s\f\x\w\n\p\5\a\4\6\4\1\i\1\1\e\t\o\w\g\x\6\6\l\u\6\6\c\9\6\0\8\7\k\8\j\v\8\u\c\y\q\n\i\x\m\g\i\x\v\7\w\5\k\w\j\3\4\7\n\4\c\b\w\l\k\e\2\n\m\i\6\x\7\f\9\0\9\l\b\i\t\s\h\f\x\z\s\d\b\y\g\i\7\6\s\a\c\0\0\k\0\j\b\1\4\v\v\0\w\c\a\e\r\l\x\r\c\t\d\v\s\p\7\z\x\x\v\j\0\e\g\m\o\c\0\z\c\1\g\b\e\6\o\0\d\e\b\7\g\k\x\r\r\v\f\t\u\u\4\z\u\a\3\p\i\d\r\g\f\l\1\e\a\x\3\b\r\l\y\6\u\m\8\h\u\t\2\t\i\m\z\e\m\0\9\j\3\b\c\x\4\6\2\p\o\p\9\o\y\d\k\g\m\c\y\o\t\r\w\p\6\0\q\k\9\4\3\9\8\j\q\n\p\2\x\w\1\h\e\t\h\p\8\f\e\b\l\8\w\4\t\6\2\x\h\p\f\5\u\v\k ]]
00:13:26.021   05:03:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}"
00:13:26.021   05:03:39 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync
00:13:26.021  [2024-11-20 05:03:39.865023] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:26.021  [2024-11-20 05:03:39.865326] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131672 ]
00:13:26.278  [2024-11-20 05:03:40.015316] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:26.278  [2024-11-20 05:03:40.042403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:26.278  [2024-11-20 05:03:40.079651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:26.278  
[2024-11-20T05:03:40.492Z] Copying: 512/512 [B] (average 250 kBps)
00:13:26.535  
00:13:26.535   05:03:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 6jmcb1dn0g9brr61jp1grfrybl6uif739qrpiuzxcsnfsddzjst38vusnt84ks43l3enq2jgfcoaj93sqt6mbcavw01nmi87wumsd0zqcr5h2iousc1d616z32184mk8761oxuop0kweri29zpgaa26vf5d1l7qnraeym83fs44bdeg2qiss4p24k9apldbp6fr83n62bwb2yn4avpfvdg7wntmigfxq5q0kcp7rltlup7qkajz6mh3qjhsfxwnp5a4641i11etowgx66lu66c96087k8jv8ucyqnixmgixv7w5kwj347n4cbwlke2nmi6x7f909lbitshfxzsdbygi76sac00k0jb14vv0wcaerlxrctdvsp7zxxvj0egmoc0zc1gbe6o0deb7gkxrrvftuu4zua3pidrgfl1eax3brly6um8hut2timzem09j3bcx462pop9oydkgmcyotrwp60qk94398jqnp2xw1hethp8febl8w4t62xhpf5uvk == \6\j\m\c\b\1\d\n\0\g\9\b\r\r\6\1\j\p\1\g\r\f\r\y\b\l\6\u\i\f\7\3\9\q\r\p\i\u\z\x\c\s\n\f\s\d\d\z\j\s\t\3\8\v\u\s\n\t\8\4\k\s\4\3\l\3\e\n\q\2\j\g\f\c\o\a\j\9\3\s\q\t\6\m\b\c\a\v\w\0\1\n\m\i\8\7\w\u\m\s\d\0\z\q\c\r\5\h\2\i\o\u\s\c\1\d\6\1\6\z\3\2\1\8\4\m\k\8\7\6\1\o\x\u\o\p\0\k\w\e\r\i\2\9\z\p\g\a\a\2\6\v\f\5\d\1\l\7\q\n\r\a\e\y\m\8\3\f\s\4\4\b\d\e\g\2\q\i\s\s\4\p\2\4\k\9\a\p\l\d\b\p\6\f\r\8\3\n\6\2\b\w\b\2\y\n\4\a\v\p\f\v\d\g\7\w\n\t\m\i\g\f\x\q\5\q\0\k\c\p\7\r\l\t\l\u\p\7\q\k\a\j\z\6\m\h\3\q\j\h\s\f\x\w\n\p\5\a\4\6\4\1\i\1\1\e\t\o\w\g\x\6\6\l\u\6\6\c\9\6\0\8\7\k\8\j\v\8\u\c\y\q\n\i\x\m\g\i\x\v\7\w\5\k\w\j\3\4\7\n\4\c\b\w\l\k\e\2\n\m\i\6\x\7\f\9\0\9\l\b\i\t\s\h\f\x\z\s\d\b\y\g\i\7\6\s\a\c\0\0\k\0\j\b\1\4\v\v\0\w\c\a\e\r\l\x\r\c\t\d\v\s\p\7\z\x\x\v\j\0\e\g\m\o\c\0\z\c\1\g\b\e\6\o\0\d\e\b\7\g\k\x\r\r\v\f\t\u\u\4\z\u\a\3\p\i\d\r\g\f\l\1\e\a\x\3\b\r\l\y\6\u\m\8\h\u\t\2\t\i\m\z\e\m\0\9\j\3\b\c\x\4\6\2\p\o\p\9\o\y\d\k\g\m\c\y\o\t\r\w\p\6\0\q\k\9\4\3\9\8\j\q\n\p\2\x\w\1\h\e\t\h\p\8\f\e\b\l\8\w\4\t\6\2\x\h\p\f\5\u\v\k ]]
00:13:26.535  
00:13:26.535  real	0m4.967s
00:13:26.535  user	0m2.261s
00:13:26.535  sys	0m1.556s
00:13:26.535   05:03:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:26.535  ************************************
00:13:26.535  END TEST dd_flags_misc_forced_aio
00:13:26.535  ************************************
00:13:26.535   05:03:40 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x
00:13:26.535   05:03:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup
00:13:26.536   05:03:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link
00:13:26.536   05:03:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link
00:13:26.536  
00:13:26.536  real	0m22.188s
00:13:26.536  user	0m9.263s
00:13:26.536  sys	0m6.675s
00:13:26.536   05:03:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:26.536   05:03:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x
00:13:26.536  ************************************
00:13:26.536  END TEST spdk_dd_posix
00:13:26.536  ************************************
00:13:26.793   05:03:40 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:13:26.793   05:03:40 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:26.793   05:03:40 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:26.793   05:03:40 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:26.793  ************************************
00:13:26.793  START TEST spdk_dd_malloc
00:13:26.793  ************************************
00:13:26.793   05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh
00:13:26.793  * Looking for test storage...
00:13:26.793  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:13:26.793     05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:26.793      05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version
00:13:26.793      05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:26.793     05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-:
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-:
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<'
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:26.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.794  		--rc genhtml_branch_coverage=1
00:13:26.794  		--rc genhtml_function_coverage=1
00:13:26.794  		--rc genhtml_legend=1
00:13:26.794  		--rc geninfo_all_blocks=1
00:13:26.794  		--rc geninfo_unexecuted_blocks=1
00:13:26.794  		
00:13:26.794  		'
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:26.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.794  		--rc genhtml_branch_coverage=1
00:13:26.794  		--rc genhtml_function_coverage=1
00:13:26.794  		--rc genhtml_legend=1
00:13:26.794  		--rc geninfo_all_blocks=1
00:13:26.794  		--rc geninfo_unexecuted_blocks=1
00:13:26.794  		
00:13:26.794  		'
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:26.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.794  		--rc genhtml_branch_coverage=1
00:13:26.794  		--rc genhtml_function_coverage=1
00:13:26.794  		--rc genhtml_legend=1
00:13:26.794  		--rc geninfo_all_blocks=1
00:13:26.794  		--rc geninfo_unexecuted_blocks=1
00:13:26.794  		
00:13:26.794  		'
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:26.794  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:26.794  		--rc genhtml_branch_coverage=1
00:13:26.794  		--rc genhtml_function_coverage=1
00:13:26.794  		--rc genhtml_legend=1
00:13:26.794  		--rc geninfo_all_blocks=1
00:13:26.794  		--rc geninfo_unexecuted_blocks=1
00:13:26.794  		
00:13:26.794  		'
00:13:26.794    05:03:40 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:26.794     05:03:40 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH
00:13:26.794      05:03:40 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x
00:13:26.794  ************************************
00:13:26.794  START TEST dd_malloc_copy
00:13:26.794  ************************************
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512')
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512')
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1
00:13:26.794   05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62
00:13:26.794    05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf
00:13:26.794    05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable
00:13:26.794    05:03:40 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:13:26.794  [2024-11-20 05:03:40.743475] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:26.794  [2024-11-20 05:03:40.743789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131764 ]
00:13:26.794  {
00:13:26.794    "subsystems": [
00:13:26.794      {
00:13:26.794        "subsystem": "bdev",
00:13:26.794        "config": [
00:13:26.794          {
00:13:26.794            "params": {
00:13:26.794              "block_size": 512,
00:13:26.794              "num_blocks": 1048576,
00:13:26.794              "name": "malloc0"
00:13:26.794            },
00:13:26.794            "method": "bdev_malloc_create"
00:13:26.794          },
00:13:26.794          {
00:13:26.794            "params": {
00:13:26.794              "block_size": 512,
00:13:26.794              "num_blocks": 1048576,
00:13:26.794              "name": "malloc1"
00:13:26.794            },
00:13:26.794            "method": "bdev_malloc_create"
00:13:26.794          },
00:13:26.794          {
00:13:26.794            "method": "bdev_wait_for_examine"
00:13:26.794          }
00:13:26.794        ]
00:13:26.794      }
00:13:26.794    ]
00:13:26.794  }
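The JSON config above is emitted by the test's `gen_conf` helper and piped to `spdk_dd` via `--json /dev/fd/62`. As an illustrative sketch only (not SPDK's actual `gen_conf` implementation), the same structure can be assembled like this:

```python
import json

def make_malloc_config(bdevs):
    """Build an SPDK-style bdev subsystem config for a list of
    (name, num_blocks, block_size) malloc bdevs, ending with
    bdev_wait_for_examine as seen in the log above."""
    config = [
        {"params": {"block_size": bs, "num_blocks": nb, "name": name},
         "method": "bdev_malloc_create"}
        for name, nb, bs in bdevs
    ]
    # spdk_dd waits for bdev examination to finish before starting I/O
    config.append({"method": "bdev_wait_for_examine"})
    return {"subsystems": [{"subsystem": "bdev", "config": config}]}

conf = make_malloc_config([("malloc0", 1048576, 512),
                           ("malloc1", 1048576, 512)])
print(json.dumps(conf, indent=2))
```

With 1048576 blocks of 512 bytes, each malloc bdev is 512 MiB, which matches the `512/512 [MB]` totals in the copy progress below.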
00:13:27.052  [2024-11-20 05:03:40.897215] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:27.052  [2024-11-20 05:03:40.923969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:27.052  [2024-11-20 05:03:40.957371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:28.427  
[2024-11-20T05:03:43.761Z] Copying: 216/512 [MB] (216 MBps)
[2024-11-20T05:03:43.761Z] Copying: 431/512 [MB] (215 MBps)
[2024-11-20T05:03:44.328Z] Copying: 512/512 [MB] (average 215 MBps)
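The per-interval and average rates in progress lines like the ones above can be extracted mechanically; a small sketch, with the line format assumed from exactly what the log shows:

```python
import re

def parse_progress(lines):
    """Extract (copied_mb, total_mb, rate_mbps) tuples from spdk_dd
    progress lines of the form 'Copying: 216/512 [MB] (216 MBps)' or
    'Copying: 512/512 [MB] (average 215 MBps)'."""
    pat = re.compile(r"Copying: (\d+)/(\d+) \[MB\] \((?:average )?(\d+) MBps\)")
    out = []
    for line in lines:
        m = pat.search(line)
        if m:
            out.append(tuple(int(g) for g in m.groups()))
    return out

lines = [
    "[2024-11-20T05:03:43.761Z] Copying: 216/512 [MB] (216 MBps)",
    "[2024-11-20T05:03:44.328Z] Copying: 512/512 [MB] (average 215 MBps)",
]
print(parse_progress(lines))  # -> [(216, 512, 216), (512, 512, 215)]
```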
00:13:30.371  
00:13:30.371   05:03:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62
00:13:30.371    05:03:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf
00:13:30.371    05:03:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable
00:13:30.371    05:03:44 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:13:30.630  [2024-11-20 05:03:44.354227] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:30.630  [2024-11-20 05:03:44.354737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131821 ]
00:13:30.630  {
00:13:30.630    "subsystems": [
00:13:30.630      {
00:13:30.630        "subsystem": "bdev",
00:13:30.630        "config": [
00:13:30.630          {
00:13:30.630            "params": {
00:13:30.630              "block_size": 512,
00:13:30.630              "num_blocks": 1048576,
00:13:30.630              "name": "malloc0"
00:13:30.630            },
00:13:30.630            "method": "bdev_malloc_create"
00:13:30.630          },
00:13:30.630          {
00:13:30.630            "params": {
00:13:30.630              "block_size": 512,
00:13:30.630              "num_blocks": 1048576,
00:13:30.630              "name": "malloc1"
00:13:30.630            },
00:13:30.630            "method": "bdev_malloc_create"
00:13:30.630          },
00:13:30.630          {
00:13:30.630            "method": "bdev_wait_for_examine"
00:13:30.630          }
00:13:30.630        ]
00:13:30.630      }
00:13:30.630    ]
00:13:30.630  }
00:13:30.630  [2024-11-20 05:03:44.505163] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:30.630  [2024-11-20 05:03:44.530683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:30.630  [2024-11-20 05:03:44.569108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:32.008  
[2024-11-20T05:03:47.341Z] Copying: 217/512 [MB] (217 MBps)
[2024-11-20T05:03:47.341Z] Copying: 436/512 [MB] (218 MBps)
[2024-11-20T05:03:47.908Z] Copying: 512/512 [MB] (average 217 MBps)
00:13:33.951  
00:13:33.951  
00:13:33.951  real	0m7.197s
00:13:33.951  user	0m6.092s
00:13:33.951  sys	0m0.980s
00:13:33.951   05:03:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:33.951   05:03:47 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x
00:13:33.951  ************************************
00:13:33.951  END TEST dd_malloc_copy
00:13:33.951  ************************************
00:13:34.210  
00:13:34.210  real	0m7.408s
00:13:34.210  user	0m6.242s
00:13:34.210  sys	0m1.049s
00:13:34.210   05:03:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:34.210   05:03:47 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x
00:13:34.210  ************************************
00:13:34.210  END TEST spdk_dd_malloc
00:13:34.210  ************************************
00:13:34.210   05:03:47 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0
00:13:34.210   05:03:47 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:34.210   05:03:47 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:34.210   05:03:47 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:34.210  ************************************
00:13:34.210  START TEST spdk_dd_bdev_to_bdev
00:13:34.210  ************************************
00:13:34.210   05:03:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0
00:13:34.210  * Looking for test storage...
00:13:34.210  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:34.210      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version
00:13:34.210      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-:
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-:
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<'
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:34.210      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1
00:13:34.210      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1
00:13:34.210      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:34.210      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1
00:13:34.210     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1
00:13:34.210      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2
00:13:34.469      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2
00:13:34.469      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:34.469      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0
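The xtrace above shows `scripts/common.sh` comparing the installed lcov version against 2 (`lt 1.15 2`): both versions are split on `.-:` into numeric fields, then compared field by field. A hedged Python sketch of the same idea (not SPDK's actual shell code; missing fields are treated as 0, matching bash's arithmetic treatment of unset array elements):

```python
import re

def lt(v1, v2):
    """Field-wise numeric version compare, splitting on '.', '-' and ':'
    and zero-padding the shorter version (mirrors cmp_versions '<')."""
    a = [int(x) for x in re.split(r"[.:-]", v1)]
    b = [int(x) for x in re.split(r"[.:-]", v2)]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    return a < b

print(lt("1.15", "2"))  # True: 1 < 2 in the first field
print(lt("2.0", "2"))   # False: equal after zero-padding
```

Note that a plain string compare would get this wrong (`"1.15" > "1.2"` lexically), which is why the trace splits into numeric fields first.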
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:34.469  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:34.469  		--rc genhtml_branch_coverage=1
00:13:34.469  		--rc genhtml_function_coverage=1
00:13:34.469  		--rc genhtml_legend=1
00:13:34.469  		--rc geninfo_all_blocks=1
00:13:34.469  		--rc geninfo_unexecuted_blocks=1
00:13:34.469  		
00:13:34.469  		'
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:34.469  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:34.469  		--rc genhtml_branch_coverage=1
00:13:34.469  		--rc genhtml_function_coverage=1
00:13:34.469  		--rc genhtml_legend=1
00:13:34.469  		--rc geninfo_all_blocks=1
00:13:34.469  		--rc geninfo_unexecuted_blocks=1
00:13:34.469  		
00:13:34.469  		'
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:34.469  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:34.469  		--rc genhtml_branch_coverage=1
00:13:34.469  		--rc genhtml_function_coverage=1
00:13:34.469  		--rc genhtml_legend=1
00:13:34.469  		--rc geninfo_all_blocks=1
00:13:34.469  		--rc geninfo_unexecuted_blocks=1
00:13:34.469  		
00:13:34.469  		'
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:34.469  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:34.469  		--rc genhtml_branch_coverage=1
00:13:34.469  		--rc genhtml_function_coverage=1
00:13:34.469  		--rc genhtml_legend=1
00:13:34.469  		--rc geninfo_all_blocks=1
00:13:34.469  		--rc geninfo_unexecuted_blocks=1
00:13:34.469  		
00:13:34.469  		'
00:13:34.469    05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:34.469     05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:34.470      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:34.470      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:34.470      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:34.470      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH
00:13:34.470      05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@")
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 1 > 1 ))
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0=Nvme0
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # bdev0=Nvme0n1
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@67 -- # nvme0_pci=0000:00:10.0
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # aio1=/home/vagrant/spdk_repo/spdk/test/dd/aio1
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@68 -- # bdev1=aio1
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie')
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@70 -- # declare -A method_bdev_nvme_attach_controller_1
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # method_bdev_aio_create_0=(['name']='aio1' ['filename']='/home/vagrant/spdk_repo/spdk/test/dd/aio1' ['block_size']='4096')
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@75 -- # declare -A method_bdev_aio_create_0
00:13:34.470   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/aio1 --bs=1048576 --count=256
00:13:34.470  [2024-11-20 05:03:48.222163] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:34.470  [2024-11-20 05:03:48.222392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131949 ]
00:13:34.470  [2024-11-20 05:03:48.356835] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:34.470  [2024-11-20 05:03:48.380591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:34.470  [2024-11-20 05:03:48.413257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:34.729  
[2024-11-20T05:03:48.945Z] Copying: 256/256 [MB] (average 1422 MBps)
00:13:34.988  
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it'
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it'
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:34.988  ************************************
00:13:34.988  START TEST dd_inflate_file
00:13:34.988  ************************************
00:13:34.988   05:03:48 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64
00:13:35.247  [2024-11-20 05:03:48.988891] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:35.247  [2024-11-20 05:03:48.989156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid131957 ]
00:13:35.247  [2024-11-20 05:03:49.139071] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:35.247  [2024-11-20 05:03:49.163843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:35.505  [2024-11-20 05:03:49.204982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:35.505  
[2024-11-20T05:03:49.720Z] Copying: 64/64 [MB] (average 1207 MBps)
00:13:35.763  
00:13:35.763  
00:13:35.763  real	0m0.685s
00:13:35.763  user	0m0.281s
00:13:35.763  sys	0m0.257s
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x
00:13:35.763  ************************************
00:13:35.763  END TEST dd_inflate_file
00:13:35.763  ************************************
00:13:35.763    05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:13:35.763    05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf
00:13:35.763    05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:13:35.763    05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:35.763  ************************************
00:13:35.763  START TEST dd_copy_to_out_bdev
00:13:35.763  ************************************
00:13:35.763   05:03:49 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62
00:13:36.022  {
00:13:36.022    "subsystems": [
00:13:36.022      {
00:13:36.022        "subsystem": "bdev",
00:13:36.022        "config": [
00:13:36.022          {
00:13:36.022            "params": {
00:13:36.022              "block_size": 4096,
00:13:36.022              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:36.022              "name": "aio1"
00:13:36.022            },
00:13:36.022            "method": "bdev_aio_create"
00:13:36.022          },
00:13:36.022          {
00:13:36.022            "params": {
00:13:36.022              "trtype": "pcie",
00:13:36.022              "traddr": "0000:00:10.0",
00:13:36.022              "name": "Nvme0"
00:13:36.022            },
00:13:36.022            "method": "bdev_nvme_attach_controller"
00:13:36.022          },
00:13:36.022          {
00:13:36.022            "method": "bdev_wait_for_examine"
00:13:36.022          }
00:13:36.022        ]
00:13:36.022      }
00:13:36.022    ]
00:13:36.022  }
00:13:36.022  [2024-11-20 05:03:49.741680] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:36.022  [2024-11-20 05:03:49.741931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132006 ]
00:13:36.022  [2024-11-20 05:03:49.891846] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:36.022  [2024-11-20 05:03:49.917795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:36.022  [2024-11-20 05:03:49.955671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:37.398  
[2024-11-20T05:03:51.920Z] Copying: 44/64 [MB] (44 MBps)
[2024-11-20T05:03:51.920Z] Copying: 64/64 [MB] (average 44 MBps)
00:13:37.963  
00:13:37.963  
00:13:37.963  real	0m2.229s
00:13:37.963  user	0m1.843s
00:13:37.963  sys	0m0.281s
00:13:37.963   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:37.963   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:37.963  ************************************
00:13:37.963  END TEST dd_copy_to_out_bdev
00:13:37.963  ************************************
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:38.223  ************************************
00:13:38.223  START TEST dd_offset_magic
00:13:38.223  ************************************
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64)
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:13:38.223   05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62
00:13:38.223    05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:13:38.223    05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:13:38.223    05:03:51 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:38.223  [2024-11-20 05:03:52.038137] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:38.223  [2024-11-20 05:03:52.038442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132057 ]
00:13:38.223  {
00:13:38.223    "subsystems": [
00:13:38.223      {
00:13:38.223        "subsystem": "bdev",
00:13:38.223        "config": [
00:13:38.223          {
00:13:38.223            "params": {
00:13:38.223              "block_size": 4096,
00:13:38.223              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:38.223              "name": "aio1"
00:13:38.223            },
00:13:38.223            "method": "bdev_aio_create"
00:13:38.223          },
00:13:38.223          {
00:13:38.223            "params": {
00:13:38.223              "trtype": "pcie",
00:13:38.223              "traddr": "0000:00:10.0",
00:13:38.223              "name": "Nvme0"
00:13:38.223            },
00:13:38.223            "method": "bdev_nvme_attach_controller"
00:13:38.223          },
00:13:38.223          {
00:13:38.223            "method": "bdev_wait_for_examine"
00:13:38.223          }
00:13:38.223        ]
00:13:38.223      }
00:13:38.223    ]
00:13:38.223  }
00:13:38.481  [2024-11-20 05:03:52.192471] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:38.481  [2024-11-20 05:03:52.211768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:38.481  [2024-11-20 05:03:52.257758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:39.420  
[2024-11-20T05:03:53.377Z] Copying: 65/65 [MB] (average 108 MBps)
00:13:39.420  
00:13:39.420   05:03:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62
00:13:39.420    05:03:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:13:39.420    05:03:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:13:39.420    05:03:53 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:39.679  [2024-11-20 05:03:53.411740] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:39.679  [2024-11-20 05:03:53.412031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132090 ]
00:13:39.679  {
00:13:39.679    "subsystems": [
00:13:39.679      {
00:13:39.679        "subsystem": "bdev",
00:13:39.679        "config": [
00:13:39.679          {
00:13:39.679            "params": {
00:13:39.679              "block_size": 4096,
00:13:39.679              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:39.679              "name": "aio1"
00:13:39.679            },
00:13:39.679            "method": "bdev_aio_create"
00:13:39.679          },
00:13:39.679          {
00:13:39.680            "params": {
00:13:39.680              "trtype": "pcie",
00:13:39.680              "traddr": "0000:00:10.0",
00:13:39.680              "name": "Nvme0"
00:13:39.680            },
00:13:39.680            "method": "bdev_nvme_attach_controller"
00:13:39.680          },
00:13:39.680          {
00:13:39.680            "method": "bdev_wait_for_examine"
00:13:39.680          }
00:13:39.680        ]
00:13:39.680      }
00:13:39.680    ]
00:13:39.680  }
00:13:39.680  [2024-11-20 05:03:53.562070] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:39.680  [2024-11-20 05:03:53.587210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:39.680  [2024-11-20 05:03:53.624843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:39.939  
[2024-11-20T05:03:54.155Z] Copying: 1024/1024 [kB] (average 500 MBps)
00:13:40.198  
00:13:40.198   05:03:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:13:40.198   05:03:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:13:40.198   05:03:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}"
00:13:40.198   05:03:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=aio1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62
00:13:40.198    05:03:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf
00:13:40.198    05:03:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:13:40.198    05:03:54 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:40.458  {
00:13:40.458    "subsystems": [
00:13:40.458      {
00:13:40.458        "subsystem": "bdev",
00:13:40.458        "config": [
00:13:40.458          {
00:13:40.458            "params": {
00:13:40.458              "block_size": 4096,
00:13:40.458              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:40.458              "name": "aio1"
00:13:40.458            },
00:13:40.458            "method": "bdev_aio_create"
00:13:40.458          },
00:13:40.458          {
00:13:40.458            "params": {
00:13:40.458              "trtype": "pcie",
00:13:40.458              "traddr": "0000:00:10.0",
00:13:40.458              "name": "Nvme0"
00:13:40.458            },
00:13:40.458            "method": "bdev_nvme_attach_controller"
00:13:40.458          },
00:13:40.458          {
00:13:40.458            "method": "bdev_wait_for_examine"
00:13:40.458          }
00:13:40.458        ]
00:13:40.458      }
00:13:40.458    ]
00:13:40.458  }
00:13:40.458  [2024-11-20 05:03:54.213935] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:40.458  [2024-11-20 05:03:54.214220] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132105 ]
00:13:40.458  [2024-11-20 05:03:54.366318] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:40.458  [2024-11-20 05:03:54.393347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:40.717  [2024-11-20 05:03:54.429001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:41.285  
[2024-11-20T05:03:55.500Z] Copying: 65/65 [MB] (average 156 MBps)
00:13:41.543  
00:13:41.544   05:03:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=aio1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62
00:13:41.544    05:03:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf
00:13:41.544    05:03:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable
00:13:41.544    05:03:55 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:41.544  [2024-11-20 05:03:55.388160] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:41.544  [2024-11-20 05:03:55.388365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132134 ]
00:13:41.544  {
00:13:41.544    "subsystems": [
00:13:41.544      {
00:13:41.544        "subsystem": "bdev",
00:13:41.544        "config": [
00:13:41.544          {
00:13:41.544            "params": {
00:13:41.544              "block_size": 4096,
00:13:41.544              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:41.544              "name": "aio1"
00:13:41.544            },
00:13:41.544            "method": "bdev_aio_create"
00:13:41.544          },
00:13:41.544          {
00:13:41.544            "params": {
00:13:41.544              "trtype": "pcie",
00:13:41.544              "traddr": "0000:00:10.0",
00:13:41.544              "name": "Nvme0"
00:13:41.544            },
00:13:41.544            "method": "bdev_nvme_attach_controller"
00:13:41.544          },
00:13:41.544          {
00:13:41.544            "method": "bdev_wait_for_examine"
00:13:41.544          }
00:13:41.544        ]
00:13:41.544      }
00:13:41.544    ]
00:13:41.544  }
00:13:41.803  [2024-11-20 05:03:55.522583] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:41.803  [2024-11-20 05:03:55.548287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:41.803  [2024-11-20 05:03:55.581181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.062  
[2024-11-20T05:03:56.278Z] Copying: 1024/1024 [kB] (average 1000 MBps)
00:13:42.321  
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]]
00:13:42.321  
00:13:42.321  real	0m4.102s
00:13:42.321  user	0m1.767s
00:13:42.321  sys	0m1.042s
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:42.321  ************************************
00:13:42.321  END TEST dd_offset_magic
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x
00:13:42.321  ************************************
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref=
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5
00:13:42.321   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62
00:13:42.321    05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf
00:13:42.321    05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:13:42.321    05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:42.321  [2024-11-20 05:03:56.175194] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:42.321  [2024-11-20 05:03:56.175489] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132164 ]
00:13:42.321  {
00:13:42.321    "subsystems": [
00:13:42.321      {
00:13:42.321        "subsystem": "bdev",
00:13:42.321        "config": [
00:13:42.321          {
00:13:42.321            "params": {
00:13:42.321              "block_size": 4096,
00:13:42.321              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:42.321              "name": "aio1"
00:13:42.321            },
00:13:42.321            "method": "bdev_aio_create"
00:13:42.321          },
00:13:42.321          {
00:13:42.321            "params": {
00:13:42.321              "trtype": "pcie",
00:13:42.321              "traddr": "0000:00:10.0",
00:13:42.321              "name": "Nvme0"
00:13:42.321            },
00:13:42.321            "method": "bdev_nvme_attach_controller"
00:13:42.321          },
00:13:42.321          {
00:13:42.321            "method": "bdev_wait_for_examine"
00:13:42.321          }
00:13:42.321        ]
00:13:42.321      }
00:13:42.321    ]
00:13:42.321  }
00:13:42.594  [2024-11-20 05:03:56.324991] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:42.594  [2024-11-20 05:03:56.351883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:42.594  [2024-11-20 05:03:56.390564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:42.872  
[2024-11-20T05:03:57.107Z] Copying: 5120/5120 [kB] (average 1000 MBps)
00:13:43.150  
00:13:43.150   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme aio1 '' 4194330
00:13:43.150   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=aio1
00:13:43.150   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref=
00:13:43.150   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330
00:13:43.150   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576
00:13:43.150   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5
00:13:43.150   05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=aio1 --count=5 --json /dev/fd/62
00:13:43.150    05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf
00:13:43.150    05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:13:43.150    05:03:56 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:43.150  {
00:13:43.150    "subsystems": [
00:13:43.150      {
00:13:43.150        "subsystem": "bdev",
00:13:43.150        "config": [
00:13:43.150          {
00:13:43.150            "params": {
00:13:43.150              "block_size": 4096,
00:13:43.150              "filename": "/home/vagrant/spdk_repo/spdk/test/dd/aio1",
00:13:43.150              "name": "aio1"
00:13:43.150            },
00:13:43.150            "method": "bdev_aio_create"
00:13:43.150          },
00:13:43.150          {
00:13:43.150            "params": {
00:13:43.150              "trtype": "pcie",
00:13:43.150              "traddr": "0000:00:10.0",
00:13:43.150              "name": "Nvme0"
00:13:43.150            },
00:13:43.150            "method": "bdev_nvme_attach_controller"
00:13:43.150          },
00:13:43.150          {
00:13:43.150            "method": "bdev_wait_for_examine"
00:13:43.150          }
00:13:43.150        ]
00:13:43.150      }
00:13:43.150    ]
00:13:43.150  }
00:13:43.150  [2024-11-20 05:03:56.915524] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:43.150  [2024-11-20 05:03:56.915826] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132181 ]
00:13:43.150  [2024-11-20 05:03:57.068293] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:43.150  [2024-11-20 05:03:57.092091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:43.418  [2024-11-20 05:03:57.124918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:43.418  
[2024-11-20T05:03:57.633Z] Copying: 5120/5120 [kB] (average 200 MBps)
00:13:43.676  
00:13:43.935   05:03:57 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/aio1
00:13:43.935  
00:13:43.935  real	0m9.710s
00:13:43.935  user	0m5.235s
00:13:43.935  sys	0m2.549s
00:13:43.935   05:03:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:43.935   05:03:57 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:43.935  ************************************
00:13:43.935  END TEST spdk_dd_bdev_to_bdev
00:13:43.935  ************************************
00:13:43.935   05:03:57 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 ))
00:13:43.935   05:03:57 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:13:43.935   05:03:57 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:43.935   05:03:57 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:43.935   05:03:57 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:43.935  ************************************
00:13:43.935  START TEST spdk_dd_sparse
00:13:43.935  ************************************
00:13:43.935   05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh
00:13:43.935  * Looking for test storage...
00:13:43.935  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:13:43.935     05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:43.935      05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version
00:13:43.935      05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-:
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-:
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<'
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:44.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.195  		--rc genhtml_branch_coverage=1
00:13:44.195  		--rc genhtml_function_coverage=1
00:13:44.195  		--rc genhtml_legend=1
00:13:44.195  		--rc geninfo_all_blocks=1
00:13:44.195  		--rc geninfo_unexecuted_blocks=1
00:13:44.195  		
00:13:44.195  		'
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:44.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.195  		--rc genhtml_branch_coverage=1
00:13:44.195  		--rc genhtml_function_coverage=1
00:13:44.195  		--rc genhtml_legend=1
00:13:44.195  		--rc geninfo_all_blocks=1
00:13:44.195  		--rc geninfo_unexecuted_blocks=1
00:13:44.195  		
00:13:44.195  		'
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:44.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.195  		--rc genhtml_branch_coverage=1
00:13:44.195  		--rc genhtml_function_coverage=1
00:13:44.195  		--rc genhtml_legend=1
00:13:44.195  		--rc geninfo_all_blocks=1
00:13:44.195  		--rc geninfo_unexecuted_blocks=1
00:13:44.195  		
00:13:44.195  		'
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:44.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:44.195  		--rc genhtml_branch_coverage=1
00:13:44.195  		--rc genhtml_function_coverage=1
00:13:44.195  		--rc genhtml_legend=1
00:13:44.195  		--rc geninfo_all_blocks=1
00:13:44.195  		--rc geninfo_unexecuted_blocks=1
00:13:44.195  		
00:13:44.195  		'
00:13:44.195    05:03:57 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:44.195     05:03:57 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH
00:13:44.195      05:03:57 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT
00:13:44.195   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1
00:13:44.196  1+0 records in
00:13:44.196  1+0 records out
00:13:44.196  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00989869 s, 424 MB/s
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
00:13:44.196  1+0 records in
00:13:44.196  1+0 records out
00:13:44.196  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0103795 s, 404 MB/s
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8
00:13:44.196  1+0 records in
00:13:44.196  1+0 records out
00:13:44.196  4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00842483 s, 498 MB/s
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:44.196  ************************************
00:13:44.196  START TEST dd_sparse_file_to_file
00:13:44.196  ************************************
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore')
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1
00:13:44.196   05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62
00:13:44.196    05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf
00:13:44.196    05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable
00:13:44.196    05:03:57 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:44.196  [2024-11-20 05:03:58.060050] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:44.196  [2024-11-20 05:03:58.060446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132265 ]
00:13:44.196  {
00:13:44.196    "subsystems": [
00:13:44.196      {
00:13:44.196        "subsystem": "bdev",
00:13:44.196        "config": [
00:13:44.196          {
00:13:44.196            "params": {
00:13:44.196              "block_size": 4096,
00:13:44.196              "filename": "dd_sparse_aio_disk",
00:13:44.196              "name": "dd_aio"
00:13:44.196            },
00:13:44.196            "method": "bdev_aio_create"
00:13:44.196          },
00:13:44.196          {
00:13:44.196            "params": {
00:13:44.196              "lvs_name": "dd_lvstore",
00:13:44.196              "bdev_name": "dd_aio"
00:13:44.196            },
00:13:44.196            "method": "bdev_lvol_create_lvstore"
00:13:44.196          },
00:13:44.196          {
00:13:44.196            "method": "bdev_wait_for_examine"
00:13:44.196          }
00:13:44.196        ]
00:13:44.196      }
00:13:44.196    ]
00:13:44.196  }
00:13:44.454  [2024-11-20 05:03:58.213322] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:44.454  [2024-11-20 05:03:58.238384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:44.454  [2024-11-20 05:03:58.268501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:44.713  
[2024-11-20T05:03:58.929Z] Copying: 12/36 [MB] (average 1200 MBps)
00:13:44.972  
00:13:44.972    05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736
00:13:44.972    05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:13:44.972    05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576
00:13:44.972    05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]]
00:13:44.972  
00:13:44.972  real	0m0.774s
00:13:44.972  user	0m0.386s
00:13:44.972  sys	0m0.265s
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:44.972  ************************************
00:13:44.972  END TEST dd_sparse_file_to_file
00:13:44.972  ************************************
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:44.972  ************************************
00:13:44.972  START TEST dd_sparse_file_to_bdev
00:13:44.972  ************************************
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true')
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1
00:13:44.972   05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62
00:13:44.972    05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf
00:13:44.972    05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable
00:13:44.972    05:03:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:44.972  {
00:13:44.972    "subsystems": [
00:13:44.972      {
00:13:44.972        "subsystem": "bdev",
00:13:44.972        "config": [
00:13:44.972          {
00:13:44.972            "params": {
00:13:44.972              "block_size": 4096,
00:13:44.972              "filename": "dd_sparse_aio_disk",
00:13:44.972              "name": "dd_aio"
00:13:44.972            },
00:13:44.972            "method": "bdev_aio_create"
00:13:44.972          },
00:13:44.972          {
00:13:44.972            "params": {
00:13:44.972              "lvs_name": "dd_lvstore",
00:13:44.972              "lvol_name": "dd_lvol",
00:13:44.972              "size_in_mib": 36,
00:13:44.972              "thin_provision": true
00:13:44.972            },
00:13:44.972            "method": "bdev_lvol_create"
00:13:44.972          },
00:13:44.972          {
00:13:44.972            "method": "bdev_wait_for_examine"
00:13:44.972          }
00:13:44.972        ]
00:13:44.972      }
00:13:44.972    ]
00:13:44.972  }
00:13:44.972  [2024-11-20 05:03:58.880858] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:44.972  [2024-11-20 05:03:58.881710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132318 ]
00:13:45.231  [2024-11-20 05:03:59.033554] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:45.231  [2024-11-20 05:03:59.052673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:45.231  [2024-11-20 05:03:59.092748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:45.490  
[2024-11-20T05:03:59.705Z] Copying: 12/36 [MB] (average 800 MBps)
00:13:45.748  
00:13:45.748  
00:13:45.748  real	0m0.722s
00:13:45.748  user	0m0.372s
00:13:45.748  sys	0m0.234s
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x
00:13:45.748  ************************************
00:13:45.748  END TEST dd_sparse_file_to_bdev
00:13:45.748  ************************************
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:45.748  ************************************
00:13:45.748  START TEST dd_sparse_bdev_to_file
00:13:45.748  ************************************
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096')
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0
00:13:45.748   05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62
00:13:45.748    05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf
00:13:45.748    05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable
00:13:45.748    05:03:59 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:45.748  [2024-11-20 05:03:59.638946] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:45.748  {
00:13:45.748    "subsystems": [
00:13:45.748      {
00:13:45.748        "subsystem": "bdev",
00:13:45.748        "config": [
00:13:45.748          {
00:13:45.748            "params": {
00:13:45.748              "block_size": 4096,
00:13:45.748              "filename": "dd_sparse_aio_disk",
00:13:45.748              "name": "dd_aio"
00:13:45.748            },
00:13:45.748            "method": "bdev_aio_create"
00:13:45.748          },
00:13:45.748          {
00:13:45.748            "method": "bdev_wait_for_examine"
00:13:45.748          }
00:13:45.748        ]
00:13:45.748      }
00:13:45.748    ]
00:13:45.748  }
00:13:45.748  [2024-11-20 05:03:59.639226] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132363 ]
00:13:46.007  [2024-11-20 05:03:59.790742] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:46.007  [2024-11-20 05:03:59.816819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:46.007  [2024-11-20 05:03:59.857321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:46.265  
[2024-11-20T05:04:00.481Z] Copying: 12/36 [MB] (average 1000 MBps)
00:13:46.524  
00:13:46.524    05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2
00:13:46.524   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736
00:13:46.524    05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3
00:13:46.524   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]]
00:13:46.525    05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576
00:13:46.525    05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]]
00:13:46.525  
00:13:46.525  real	0m0.722s
00:13:46.525  user	0m0.380s
00:13:46.525  sys	0m0.239s
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x
00:13:46.525  ************************************
00:13:46.525  END TEST dd_sparse_bdev_to_file
00:13:46.525  ************************************
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3
00:13:46.525  
00:13:46.525  real	0m2.627s
00:13:46.525  user	0m1.368s
00:13:46.525  sys	0m0.914s
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:46.525   05:04:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x
00:13:46.525  ************************************
00:13:46.525  END TEST spdk_dd_sparse
00:13:46.525  ************************************
00:13:46.525   05:04:00 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:13:46.525   05:04:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:46.525   05:04:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:46.525   05:04:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:46.525  ************************************
00:13:46.525  START TEST spdk_dd_negative
00:13:46.525  ************************************
00:13:46.525   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh
00:13:46.785  * Looking for test storage...
00:13:46.785  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-:
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-:
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<'
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:46.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:46.785  		--rc genhtml_branch_coverage=1
00:13:46.785  		--rc genhtml_function_coverage=1
00:13:46.785  		--rc genhtml_legend=1
00:13:46.785  		--rc geninfo_all_blocks=1
00:13:46.785  		--rc geninfo_unexecuted_blocks=1
00:13:46.785  		
00:13:46.785  		'
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:46.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:46.785  		--rc genhtml_branch_coverage=1
00:13:46.785  		--rc genhtml_function_coverage=1
00:13:46.785  		--rc genhtml_legend=1
00:13:46.785  		--rc geninfo_all_blocks=1
00:13:46.785  		--rc geninfo_unexecuted_blocks=1
00:13:46.785  		
00:13:46.785  		'
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:46.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:46.785  		--rc genhtml_branch_coverage=1
00:13:46.785  		--rc genhtml_function_coverage=1
00:13:46.785  		--rc genhtml_legend=1
00:13:46.785  		--rc geninfo_all_blocks=1
00:13:46.785  		--rc geninfo_unexecuted_blocks=1
00:13:46.785  		
00:13:46.785  		'
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:46.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:46.785  		--rc genhtml_branch_coverage=1
00:13:46.785  		--rc genhtml_function_coverage=1
00:13:46.785  		--rc genhtml_legend=1
00:13:46.785  		--rc geninfo_all_blocks=1
00:13:46.785  		--rc geninfo_unexecuted_blocks=1
00:13:46.785  		
00:13:46.785  		'
00:13:46.785    05:04:00 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:46.785     05:04:00 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:46.785      05:04:00 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:46.786      05:04:00 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:46.786      05:04:00 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH
00:13:46.786      05:04:00 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:46.786  ************************************
00:13:46.786  START TEST dd_invalid_arguments
00:13:46.786  ************************************
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:46.786    05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:46.786    05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:46.786   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob=
00:13:46.786  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options]
00:13:46.786  
00:13:46.786  CPU options:
00:13:46.786   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:13:46.786                                   (like [0,1,10])
00:13:46.786       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:13:46.786                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:13:46.786                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:13:46.786                             Within the group, '-' is used for range separator,
00:13:46.786                             ',' is used for single number separator.
00:13:46.786                             '( )' can be omitted for single element group,
00:13:46.786                             '@' can be omitted if cpus and lcores have the same value
00:13:46.786       --disable-cpumask-locks    Disable CPU core lock files.
00:13:46.786       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:13:46.786                             pollers in the app support interrupt mode)
00:13:46.786   -p, --main-core <id>      main (primary) core for DPDK
00:13:46.786  
00:13:46.786  Configuration options:
00:13:46.786   -c, --config, --json  <config>     JSON config file
00:13:46.786   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:13:46.786       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:13:46.786       --wait-for-rpc        wait for RPCs to initialize subsystems
00:13:46.786       --rpcs-allowed	   comma-separated list of permitted RPCS
00:13:46.786       --json-ignore-init-errors    don't exit on invalid config entry
00:13:46.786  
00:13:46.786  Memory options:
00:13:46.786       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:13:46.786       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:13:46.786       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:13:46.786   -R, --huge-unlink         unlink huge files after initialization
00:13:46.786   -n, --mem-channels <num>  number of memory channels used for DPDK
00:13:46.786   -s, --mem-size <size>     memory size in MB for DPDK (default: 0MB)
00:13:46.786       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:13:46.786       --no-huge             run without using hugepages
00:13:46.786       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:13:46.786   -i, --shm-id <id>         shared memory ID (optional)
00:13:46.786   -g, --single-file-segments   force creating just one hugetlbfs file
00:13:46.786  
00:13:46.786  PCI options:
00:13:46.786   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:13:46.786   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:13:46.786   -u, --no-pci              disable PCI access
00:13:46.786       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:13:46.786  
00:13:46.786  Log options:
00:13:46.786   -L, --logflag <flag>      enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 
00:13:46.786                             app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 
00:13:46.786                             bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 
00:13:46.786                             blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 
00:13:46.786                             blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 
00:13:46.786                             iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 
00:13:46.786                             nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 
00:13:46.786                             sock_posix, spdk_aio_mgr_io, thread, trace, vbdev_delay, vbdev_gpt, 
00:13:46.786                             vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, 
00:13:46.786                             vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, 
00:13:46.786                             virtio_user, virtio_vfio_user, vmd)
00:13:46.786       --silence-noticelog   disable notice level logging to stderr
00:13:46.786  
00:13:46.786  Trace options:
00:13:46.786       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:13:46.786                                   setting 0 to disable trace (default 32768)
00:13:46.786                                   Tracepoints vary in size and can use more than one trace entry.
00:13:46.786   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:13:46.786               /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii='
00:13:46.786  [2024-11-20 05:04:00.686713] spdk_dd.c:1480:main: *ERROR*: Invalid arguments
00:13:46.786                group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 
00:13:46.786                             blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 
00:13:46.786                             bdev_raid, scheduler, all).
00:13:46.786                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:13:46.786                             a tracepoint group. First tpoint inside a group can be enabled by
00:13:46.786                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:13:46.786                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:13:46.786                             in /include/spdk_internal/trace_defs.h
00:13:46.786  
00:13:46.786  Other options:
00:13:46.786   -h, --help                show this usage
00:13:46.786   -v, --version             print SPDK version
00:13:46.786   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:13:46.786       --env-context         Opaque context for use of the env implementation
00:13:46.786  
00:13:46.786  Application specific:
00:13:46.786  [--------- DD Options ---------]
00:13:46.786   --if Input file. Must specify either --if or --ib.
00:13:46.786   --ib Input bdev. Must specifier either --if or --ib
00:13:46.786   --of Output file. Must specify either --of or --ob.
00:13:46.786   --ob Output bdev. Must specify either --of or --ob.
00:13:46.786   --iflag Input file flags.
00:13:46.786   --oflag Output file flags.
00:13:46.786   --bs I/O unit size (default: 4096)
00:13:46.786   --qd Queue depth (default: 2)
00:13:46.786   --count I/O unit count. The number of I/O units to copy. (default: all)
00:13:46.786   --skip Skip this many I/O units at start of input. (default: 0)
00:13:46.786   --seek Skip this many I/O units at start of output. (default: 0)
00:13:46.786   --aio Force usage of AIO. (by default io_uring is used if available)
00:13:46.786   --sparse Enable hole skipping in input target
00:13:46.786   Available iflag and oflag values:
00:13:46.786    append - append mode
00:13:46.786    direct - use direct I/O for data
00:13:46.786    directory - fail unless a directory
00:13:46.786    dsync - use synchronized I/O for data
00:13:46.786    noatime - do not update access time
00:13:46.786    noctty - do not assign controlling terminal from file
00:13:46.786    nofollow - do not follow symlinks
00:13:46.786    nonblock - use non-blocking I/O
00:13:46.786    sync - use synchronized I/O for data and metadata
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.046  
00:13:47.046  real	0m0.125s
00:13:47.046  user	0m0.066s
00:13:47.046  sys	0m0.059s
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:47.046  ************************************
00:13:47.046  END TEST dd_invalid_arguments
00:13:47.046  ************************************
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:47.046  ************************************
00:13:47.046  START TEST dd_double_input
00:13:47.046  ************************************
00:13:47.046   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.047    05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.047    05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob=
00:13:47.047  [2024-11-20 05:04:00.860462] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both.
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.047  
00:13:47.047  real	0m0.111s
00:13:47.047  user	0m0.067s
00:13:47.047  sys	0m0.044s
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x
00:13:47.047  ************************************
00:13:47.047  END TEST dd_double_input
00:13:47.047  ************************************
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:47.047  ************************************
00:13:47.047  START TEST dd_double_output
00:13:47.047  ************************************
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.047    05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.047    05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:47.047   05:04:00 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob=
00:13:47.307  [2024-11-20 05:04:01.023390] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both.
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.307  
00:13:47.307  real	0m0.123s
00:13:47.307  user	0m0.060s
00:13:47.307  sys	0m0.064s
00:13:47.307  ************************************
00:13:47.307  END TEST dd_double_output
00:13:47.307  ************************************
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:47.307  ************************************
00:13:47.307  START TEST dd_no_input
00:13:47.307  ************************************
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.307    05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.307    05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob=
00:13:47.307  [2024-11-20 05:04:01.195122] spdk_dd.c:1499:main: *ERROR*: You must specify either --if or --ib
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.307  
00:13:47.307  real	0m0.121s
00:13:47.307  user	0m0.074s
00:13:47.307  sys	0m0.047s
00:13:47.307  ************************************
00:13:47.307  END TEST dd_no_input
00:13:47.307  ************************************
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:47.307   05:04:01 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:47.567  ************************************
00:13:47.567  START TEST dd_no_output
00:13:47.567  ************************************
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.567    05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.567    05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
00:13:47.567  [2024-11-20 05:04:01.370442] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.567  
00:13:47.567  real	0m0.119s
00:13:47.567  user	0m0.066s
00:13:47.567  sys	0m0.053s
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:47.567  ************************************
00:13:47.567  END TEST dd_no_output
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x
00:13:47.567  ************************************
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:47.567  ************************************
00:13:47.567  START TEST dd_wrong_blocksize
00:13:47.567  ************************************
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.567    05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.567    05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:47.567   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0
00:13:47.825  [2024-11-20 05:04:01.545171] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:47.825  
00:13:47.825  real	0m0.115s
00:13:47.825  user	0m0.051s
00:13:47.825  sys	0m0.065s
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x
00:13:47.825  ************************************
00:13:47.825  END TEST dd_wrong_blocksize
00:13:47.825  ************************************
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:47.825  ************************************
00:13:47.825  START TEST dd_smaller_blocksize
00:13:47.825  ************************************
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.825    05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.825   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.825    05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.826   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:47.826   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:47.826   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:47.826   05:04:01 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999
00:13:47.826  [2024-11-20 05:04:01.724910] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:47.826  [2024-11-20 05:04:01.725214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132633 ]
00:13:48.084  [2024-11-20 05:04:01.876488] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:48.084  [2024-11-20 05:04:01.901928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:48.084  [2024-11-20 05:04:01.941504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:48.342  [2024-11-20 05:04:02.126081] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value
00:13:48.342  [2024-11-20 05:04:02.126181] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:48.342  [2024-11-20 05:04:02.245492] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:48.601  
00:13:48.601  real	0m0.685s
00:13:48.601  user	0m0.316s
00:13:48.601  sys	0m0.269s
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x
00:13:48.601  ************************************
00:13:48.601  END TEST dd_smaller_blocksize
00:13:48.601  ************************************
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:48.601  ************************************
00:13:48.601  START TEST dd_invalid_count
00:13:48.601  ************************************
00:13:48.601   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.602    05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.602    05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9
00:13:48.602  [2024-11-20 05:04:02.465464] spdk_dd.c:1517:main: *ERROR*: Invalid --count value
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:48.602  
00:13:48.602  real	0m0.125s
00:13:48.602  user	0m0.048s
00:13:48.602  sys	0m0.078s
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.602   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x
00:13:48.602  ************************************
00:13:48.602  END TEST dd_invalid_count
00:13:48.602  ************************************
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:48.861  ************************************
00:13:48.861  START TEST dd_invalid_oflag
00:13:48.861  ************************************
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.861    05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.861    05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:48.861   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0
00:13:48.861  [2024-11-20 05:04:02.655325] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:48.862  
00:13:48.862  real	0m0.125s
00:13:48.862  user	0m0.063s
00:13:48.862  sys	0m0.062s
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x
00:13:48.862  ************************************
00:13:48.862  END TEST dd_invalid_oflag
00:13:48.862  ************************************
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:48.862  ************************************
00:13:48.862  START TEST dd_invalid_iflag
00:13:48.862  ************************************
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.862    05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.862    05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:48.862   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0
00:13:49.121  [2024-11-20 05:04:02.831843] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:49.121  
00:13:49.121  real	0m0.111s
00:13:49.121  user	0m0.047s
00:13:49.121  sys	0m0.064s
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x
00:13:49.121  ************************************
00:13:49.121  END TEST dd_invalid_iflag
00:13:49.121  ************************************
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:49.121  ************************************
00:13:49.121  START TEST dd_unknown_flag
00:13:49.121  ************************************
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:49.121    05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:49.121    05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:49.121   05:04:02 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
00:13:49.121  [2024-11-20 05:04:02.993823] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:49.122  [2024-11-20 05:04:02.994129] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132742 ]
00:13:49.380  [2024-11-20 05:04:03.144026] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:49.380  [2024-11-20 05:04:03.171052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:49.380  [2024-11-20 05:04:03.209848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:49.380  [2024-11-20 05:04:03.298955] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1
00:13:49.380  [2024-11-20 05:04:03.299051] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:49.380  
[2024-11-20T05:04:03.337Z] Copying: 0/0 [B] (average 0 Bps)
00:13:49.380  [2024-11-20 05:04:03.299259] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice
00:13:49.639  [2024-11-20 05:04:03.413434] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:49.639  
00:13:49.639  
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:49.639  
00:13:49.639  real	0m0.620s
00:13:49.639  user	0m0.282s
00:13:49.639  sys	0m0.203s
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x
00:13:49.639  ************************************
00:13:49.639  END TEST dd_unknown_flag
00:13:49.639  ************************************
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:49.639   05:04:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:49.898  ************************************
00:13:49.898  START TEST dd_invalid_json
00:13:49.898  ************************************
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.898    05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # :
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:49.898    05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:49.898    05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:49.898   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62
00:13:49.898  [2024-11-20 05:04:03.670236] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:49.898  [2024-11-20 05:04:03.670517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132776 ]
00:13:49.898  [2024-11-20 05:04:03.822753] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:49.898  [2024-11-20 05:04:03.845722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:50.158  [2024-11-20 05:04:03.876954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:50.158  [2024-11-20 05:04:03.877064] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty
00:13:50.158  [2024-11-20 05:04:03.877099] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:13:50.158  [2024-11-20 05:04:03.877136] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:50.158  [2024-11-20 05:04:03.877235] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:50.158  
00:13:50.158  real	0m0.353s
00:13:50.158  user	0m0.130s
00:13:50.158  sys	0m0.127s
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:50.158  ************************************
00:13:50.158  END TEST dd_invalid_json
00:13:50.158  ************************************
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:50.158   05:04:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:50.158  ************************************
00:13:50.158  START TEST dd_invalid_seek
00:13:50.158  ************************************
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0
00:13:50.158    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.158    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable
00:13:50.158    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:50.158    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:50.158    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:50.158   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512
00:13:50.158  {
00:13:50.158    "subsystems": [
00:13:50.158      {
00:13:50.158        "subsystem": "bdev",
00:13:50.158        "config": [
00:13:50.158          {
00:13:50.158            "params": {
00:13:50.158              "block_size": 512,
00:13:50.158              "num_blocks": 512,
00:13:50.158              "name": "malloc0"
00:13:50.158            },
00:13:50.158            "method": "bdev_malloc_create"
00:13:50.158          },
00:13:50.158          {
00:13:50.158            "params": {
00:13:50.158              "block_size": 512,
00:13:50.158              "num_blocks": 512,
00:13:50.158              "name": "malloc1"
00:13:50.158            },
00:13:50.158            "method": "bdev_malloc_create"
00:13:50.158          },
00:13:50.158          {
00:13:50.158            "method": "bdev_wait_for_examine"
00:13:50.158          }
00:13:50.158        ]
00:13:50.158      }
00:13:50.158    ]
00:13:50.158  }
00:13:50.158  [2024-11-20 05:04:04.075741] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:50.158  [2024-11-20 05:04:04.076149] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132810 ]
00:13:50.417  [2024-11-20 05:04:04.224437] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:50.417  [2024-11-20 05:04:04.250001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:50.417  [2024-11-20 05:04:04.284264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:50.676  [2024-11-20 05:04:04.394771] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output
00:13:50.676  [2024-11-20 05:04:04.394884] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:50.676  [2024-11-20 05:04:04.509920] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:50.676  
00:13:50.676  real	0m0.596s
00:13:50.676  user	0m0.332s
00:13:50.676  sys	0m0.220s
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:50.676   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x
00:13:50.676  ************************************
00:13:50.676  END TEST dd_invalid_seek
00:13:50.676  ************************************
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:50.935  ************************************
00:13:50.935  START TEST dd_invalid_skip
00:13:50.935  ************************************
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:13:50.935    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0
00:13:50.935    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:13:50.935    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:50.935    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:50.935    05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:50.935   05:04:04 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512
00:13:50.935  {
00:13:50.935    "subsystems": [
00:13:50.935      {
00:13:50.935        "subsystem": "bdev",
00:13:50.935        "config": [
00:13:50.935          {
00:13:50.935            "params": {
00:13:50.935              "block_size": 512,
00:13:50.935              "num_blocks": 512,
00:13:50.935              "name": "malloc0"
00:13:50.935            },
00:13:50.935            "method": "bdev_malloc_create"
00:13:50.935          },
00:13:50.935          {
00:13:50.935            "params": {
00:13:50.935              "block_size": 512,
00:13:50.935              "num_blocks": 512,
00:13:50.935              "name": "malloc1"
00:13:50.935            },
00:13:50.935            "method": "bdev_malloc_create"
00:13:50.935          },
00:13:50.935          {
00:13:50.935            "method": "bdev_wait_for_examine"
00:13:50.935          }
00:13:50.935        ]
00:13:50.935      }
00:13:50.935    ]
00:13:50.935  }
00:13:50.935  [2024-11-20 05:04:04.734991] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:50.935  [2024-11-20 05:04:04.735300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132850 ]
00:13:50.935  [2024-11-20 05:04:04.887030] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:51.194  [2024-11-20 05:04:04.909620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:51.194  [2024-11-20 05:04:04.944717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:51.194  [2024-11-20 05:04:05.051011] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input
00:13:51.194  [2024-11-20 05:04:05.051115] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:51.453  [2024-11-20 05:04:05.166271] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:51.453  
00:13:51.453  real	0m0.588s
00:13:51.453  user	0m0.311s
00:13:51.453  sys	0m0.239s
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x
00:13:51.453  ************************************
00:13:51.453  END TEST dd_invalid_skip
00:13:51.453  ************************************
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:51.453  ************************************
00:13:51.453  START TEST dd_invalid_input_count
00:13:51.453  ************************************
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:13:51.453    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:51.453    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable
00:13:51.453    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.453    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.453    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:51.453   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512
00:13:51.453  {
00:13:51.453    "subsystems": [
00:13:51.453      {
00:13:51.453        "subsystem": "bdev",
00:13:51.453        "config": [
00:13:51.453          {
00:13:51.453            "params": {
00:13:51.453              "block_size": 512,
00:13:51.453              "num_blocks": 512,
00:13:51.453              "name": "malloc0"
00:13:51.453            },
00:13:51.453            "method": "bdev_malloc_create"
00:13:51.453          },
00:13:51.453          {
00:13:51.453            "params": {
00:13:51.453              "block_size": 512,
00:13:51.453              "num_blocks": 512,
00:13:51.453              "name": "malloc1"
00:13:51.453            },
00:13:51.453            "method": "bdev_malloc_create"
00:13:51.453          },
00:13:51.453          {
00:13:51.453            "method": "bdev_wait_for_examine"
00:13:51.453          }
00:13:51.453        ]
00:13:51.453      }
00:13:51.453    ]
00:13:51.453  }
00:13:51.453  [2024-11-20 05:04:05.373712] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:51.453  [2024-11-20 05:04:05.374106] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132888 ]
00:13:51.713  [2024-11-20 05:04:05.525625] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:51.713  [2024-11-20 05:04:05.551743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:51.713  [2024-11-20 05:04:05.600620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:51.972  [2024-11-20 05:04:05.717452] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input
00:13:51.972  [2024-11-20 05:04:05.717554] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:51.972  [2024-11-20 05:04:05.832513] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:51.972   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228
00:13:51.972   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:51.972   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100
00:13:51.972   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in
00:13:51.972   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1
00:13:51.972   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:51.972  
00:13:51.972  real	0m0.621s
00:13:51.972  user	0m0.339s
00:13:51.972  sys	0m0.246s
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x
00:13:52.231  ************************************
00:13:52.231  END TEST dd_invalid_input_count
00:13:52.231  ************************************
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:52.231  ************************************
00:13:52.231  START TEST dd_invalid_output_count
00:13:52.231  ************************************
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.231    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf
00:13:52.231    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable
00:13:52.231    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x
00:13:52.231   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.232    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.232   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.232    05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.232   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.232   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.232   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:52.232   05:04:05 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512
00:13:52.232  {
00:13:52.232    "subsystems": [
00:13:52.232      {
00:13:52.232        "subsystem": "bdev",
00:13:52.232        "config": [
00:13:52.232          {
00:13:52.232            "params": {
00:13:52.232              "block_size": 512,
00:13:52.232              "num_blocks": 512,
00:13:52.232              "name": "malloc0"
00:13:52.232            },
00:13:52.232            "method": "bdev_malloc_create"
00:13:52.232          },
00:13:52.232          {
00:13:52.232            "method": "bdev_wait_for_examine"
00:13:52.232          }
00:13:52.232        ]
00:13:52.232      }
00:13:52.232    ]
00:13:52.232  }
00:13:52.232  [2024-11-20 05:04:06.063684] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:52.232  [2024-11-20 05:04:06.064103] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132936 ]
00:13:52.490  [2024-11-20 05:04:06.231124] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:52.490  [2024-11-20 05:04:06.256650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:52.490  [2024-11-20 05:04:06.295847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:52.490  [2024-11-20 05:04:06.404373] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output
00:13:52.490  [2024-11-20 05:04:06.404495] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:52.749  [2024-11-20 05:04:06.519338] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:52.749  
00:13:52.749  real	0m0.630s
00:13:52.749  user	0m0.362s
00:13:52.749  sys	0m0.225s
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x
00:13:52.749  ************************************
00:13:52.749  END TEST dd_invalid_output_count
00:13:52.749  ************************************
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:52.749  ************************************
00:13:52.749  START TEST dd_bs_not_multiple
00:13:52.749  ************************************
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512')
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512')
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:13:52.749    05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.749    05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable
00:13:52.749    05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.749    05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.749    05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]]
00:13:52.749   05:04:06 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62
00:13:53.008  [2024-11-20 05:04:06.716758] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:53.008  [2024-11-20 05:04:06.716971] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132975 ]
00:13:53.008  {
00:13:53.008    "subsystems": [
00:13:53.008      {
00:13:53.008        "subsystem": "bdev",
00:13:53.008        "config": [
00:13:53.008          {
00:13:53.008            "params": {
00:13:53.008              "block_size": 512,
00:13:53.008              "num_blocks": 512,
00:13:53.008              "name": "malloc0"
00:13:53.008            },
00:13:53.008            "method": "bdev_malloc_create"
00:13:53.008          },
00:13:53.008          {
00:13:53.008            "params": {
00:13:53.008              "block_size": 512,
00:13:53.008              "num_blocks": 512,
00:13:53.008              "name": "malloc1"
00:13:53.008            },
00:13:53.008            "method": "bdev_malloc_create"
00:13:53.008          },
00:13:53.008          {
00:13:53.008            "method": "bdev_wait_for_examine"
00:13:53.008          }
00:13:53.008        ]
00:13:53.008      }
00:13:53.008    ]
00:13:53.008  }
00:13:53.008  [2024-11-20 05:04:06.851820] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:53.008  [2024-11-20 05:04:06.876216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:53.008  [2024-11-20 05:04:06.909702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:53.266  [2024-11-20 05:04:07.018845] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512)
00:13:53.266  [2024-11-20 05:04:07.018954] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:13:53.266  [2024-11-20 05:04:07.133826] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy
00:13:53.266   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:53.525  
00:13:53.525  real	0m0.559s
00:13:53.525  user	0m0.319s
00:13:53.525  sys	0m0.205s
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x
00:13:53.525  ************************************
00:13:53.525  END TEST dd_bs_not_multiple
00:13:53.525  ************************************
00:13:53.525  
00:13:53.525  real	0m6.842s
00:13:53.525  user	0m3.576s
00:13:53.525  sys	0m2.684s
00:13:53.525  ************************************
00:13:53.525  END TEST spdk_dd_negative
00:13:53.525  ************************************
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:53.525   05:04:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x
00:13:53.525  ************************************
00:13:53.525  END TEST spdk_dd
00:13:53.525  ************************************
00:13:53.525  
00:13:53.525  real	1m9.641s
00:13:53.525  user	0m38.327s
00:13:53.525  sys	0m20.238s
00:13:53.526   05:04:07 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:53.526   05:04:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x
00:13:53.526   05:04:07  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:13:53.526   05:04:07  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:13:53.526   05:04:07  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:53.526   05:04:07  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:53.526   05:04:07  -- common/autotest_common.sh@10 -- # set +x
00:13:53.526  ************************************
00:13:53.526  START TEST blockdev_nvme
00:13:53.526  ************************************
00:13:53.526   05:04:07 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:13:53.526  * Looking for test storage...
00:13:53.526  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:13:53.526    05:04:07 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:13:53.526     05:04:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:13:53.526     05:04:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:13:53.785    05:04:07 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:13:53.785     05:04:07 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:13:53.785    05:04:07 blockdev_nvme -- scripts/common.sh@368 -- # return 0
00:13:53.785    05:04:07 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:13:53.785    05:04:07 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:13:53.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.785  		--rc genhtml_branch_coverage=1
00:13:53.785  		--rc genhtml_function_coverage=1
00:13:53.785  		--rc genhtml_legend=1
00:13:53.785  		--rc geninfo_all_blocks=1
00:13:53.785  		--rc geninfo_unexecuted_blocks=1
00:13:53.785  		
00:13:53.785  		'
00:13:53.785    05:04:07 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:13:53.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.785  		--rc genhtml_branch_coverage=1
00:13:53.785  		--rc genhtml_function_coverage=1
00:13:53.785  		--rc genhtml_legend=1
00:13:53.785  		--rc geninfo_all_blocks=1
00:13:53.785  		--rc geninfo_unexecuted_blocks=1
00:13:53.785  		
00:13:53.785  		'
00:13:53.785    05:04:07 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:13:53.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.785  		--rc genhtml_branch_coverage=1
00:13:53.785  		--rc genhtml_function_coverage=1
00:13:53.785  		--rc genhtml_legend=1
00:13:53.785  		--rc geninfo_all_blocks=1
00:13:53.785  		--rc geninfo_unexecuted_blocks=1
00:13:53.785  		
00:13:53.785  		'
00:13:53.785    05:04:07 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:13:53.785  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:13:53.785  		--rc genhtml_branch_coverage=1
00:13:53.785  		--rc genhtml_function_coverage=1
00:13:53.785  		--rc genhtml_legend=1
00:13:53.785  		--rc geninfo_all_blocks=1
00:13:53.785  		--rc geninfo_unexecuted_blocks=1
00:13:53.785  		
00:13:53.785  		'
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:13:53.785    05:04:07 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:13:53.785    05:04:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device=
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek=
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx=
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]]
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]]
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=133074
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 133074
00:13:53.785   05:04:07 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:13:53.785   05:04:07 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 133074 ']'
00:13:53.786   05:04:07 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:53.786   05:04:07 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:53.786  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:53.786   05:04:07 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:53.786   05:04:07 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:53.786   05:04:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:53.786  [2024-11-20 05:04:07.648303] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:53.786  [2024-11-20 05:04:07.649141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133074 ]
00:13:54.044  [2024-11-20 05:04:07.800640] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:54.044  [2024-11-20 05:04:07.819381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:54.044  [2024-11-20 05:04:07.856081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\'''
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "3c68f93e-a643-4324-b7f8-7804847ba70b"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "3c68f93e-a643-4324-b7f8-7804847ba70b",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:13:54.980    05:04:08 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:13:54.980   05:04:08 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 133074
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 133074 ']'
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 133074
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:54.980    05:04:08 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 133074
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:54.980  killing process with pid 133074
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 133074'
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 133074
00:13:54.980   05:04:08 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 133074
00:13:55.547   05:04:09 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:13:55.547   05:04:09 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:13:55.547   05:04:09 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:13:55.547   05:04:09 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:55.547   05:04:09 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:55.547  ************************************
00:13:55.547  START TEST bdev_hello_world
00:13:55.547  ************************************
00:13:55.547   05:04:09 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:13:55.547  [2024-11-20 05:04:09.416101] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:55.547  [2024-11-20 05:04:09.416391] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133144 ]
00:13:55.806  [2024-11-20 05:04:09.566675] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:55.806  [2024-11-20 05:04:09.592418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:55.806  [2024-11-20 05:04:09.634785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:56.063  [2024-11-20 05:04:09.845974] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:13:56.063  [2024-11-20 05:04:09.846036] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:13:56.063  [2024-11-20 05:04:09.846106] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:13:56.063  [2024-11-20 05:04:09.848485] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:13:56.063  [2024-11-20 05:04:09.849090] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:13:56.063  [2024-11-20 05:04:09.849154] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:13:56.063  [2024-11-20 05:04:09.849471] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:13:56.063  
00:13:56.063  [2024-11-20 05:04:09.849527] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:13:56.321  
00:13:56.321  real	0m0.708s
00:13:56.321  user	0m0.384s
00:13:56.321  sys	0m0.224s
00:13:56.321   05:04:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:56.321  ************************************
00:13:56.321  END TEST bdev_hello_world
00:13:56.321   05:04:10 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:13:56.321  ************************************
00:13:56.321   05:04:10 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:13:56.321   05:04:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:56.321   05:04:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:56.321   05:04:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:56.321  ************************************
00:13:56.321  START TEST bdev_bounds
00:13:56.321  ************************************
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=133182
00:13:56.321  Process bdevio pid: 133182
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 133182'
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 133182
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 133182 ']'
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:56.321  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:56.321   05:04:10 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:56.321  [2024-11-20 05:04:10.178237] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:56.321  [2024-11-20 05:04:10.178549] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133182 ]
00:13:56.579  [2024-11-20 05:04:10.352535] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:56.579  [2024-11-20 05:04:10.375894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:56.580  [2024-11-20 05:04:10.410843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:13:56.580  [2024-11-20 05:04:10.410932] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:13:56.580  [2024-11-20 05:04:10.410935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:57.516   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:57.516   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:13:57.517  I/O targets:
00:13:57.517    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:13:57.517  
00:13:57.517  
00:13:57.517       CUnit - A unit testing framework for C - Version 2.1-3
00:13:57.517       http://cunit.sourceforge.net/
00:13:57.517  
00:13:57.517  
00:13:57.517  Suite: bdevio tests on: Nvme0n1
00:13:57.517    Test: blockdev write read block ...passed
00:13:57.517    Test: blockdev write zeroes read block ...passed
00:13:57.517    Test: blockdev write zeroes read no split ...passed
00:13:57.517    Test: blockdev write zeroes read split ...passed
00:13:57.517    Test: blockdev write zeroes read split partial ...passed
00:13:57.517    Test: blockdev reset ...[2024-11-20 05:04:11.279573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:13:57.517  [2024-11-20 05:04:11.281736] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:13:57.517  passed
00:13:57.517    Test: blockdev write read 8 blocks ...passed
00:13:57.517    Test: blockdev write read size > 128k ...passed
00:13:57.517    Test: blockdev write read invalid size ...passed
00:13:57.517    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:13:57.517    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:13:57.517    Test: blockdev write read max offset ...passed
00:13:57.517    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:13:57.517    Test: blockdev writev readv 8 blocks ...passed
00:13:57.517    Test: blockdev writev readv 30 x 1block ...passed
00:13:57.517    Test: blockdev writev readv block ...passed
00:13:57.517    Test: blockdev writev readv size > 128k ...passed
00:13:57.517    Test: blockdev writev readv size > 128k in two iovs ...passed
00:13:57.517    Test: blockdev comparev and writev ...[2024-11-20 05:04:11.288172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x9c00d000 len:0x1000
00:13:57.517  [2024-11-20 05:04:11.288370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:13:57.517  passed
00:13:57.517    Test: blockdev nvme passthru rw ...passed
00:13:57.517    Test: blockdev nvme passthru vendor specific ...[2024-11-20 05:04:11.289344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:13:57.517  [2024-11-20 05:04:11.289533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:13:57.517  passed
00:13:57.517    Test: blockdev nvme admin passthru ...passed
00:13:57.517    Test: blockdev copy ...passed
00:13:57.517  
00:13:57.517  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:57.517                suites      1      1    n/a      0        0
00:13:57.517                 tests     23     23     23      0        0
00:13:57.517               asserts    152    152    152      0      n/a
00:13:57.517  
00:13:57.517  Elapsed time =    0.056 seconds
00:13:57.517  0
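The bdevio output above ends with a standard CUnit run summary (23 tests ran, 0 failed, 152 asserts). A small sketch of how such a summary block can be parsed and checked programmatically; the column layout is taken directly from the log, and the parser itself is illustrative, not part of the test suite:

```python
# Parse a CUnit "Run Summary" block (layout as printed by bdevio above)
# into a dict of rows, then check the failure counts.
SUMMARY = """\
Run Summary:    Type  Total    Ran Passed Failed Inactive
              suites      1      1    n/a      0        0
               tests     23     23     23      0        0
             asserts    152    152    152      0      n/a
"""

COLUMNS = ("total", "ran", "passed", "failed", "inactive")


def parse_run_summary(text: str) -> dict:
    rows = {}
    for line in text.splitlines()[1:]:  # skip the header row
        parts = line.split()
        # parts[0] is the row type (suites/tests/asserts); the rest are counts.
        rows[parts[0]] = dict(zip(COLUMNS, parts[1:]))
    return rows


summary = parse_run_summary(SUMMARY)
print(summary["tests"]["failed"])  # 0
```

Values stay as strings because CUnit prints `n/a` in some cells; a numeric parse would need to special-case those.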
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 133182
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 133182 ']'
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 133182
00:13:57.517    05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:57.517    05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 133182
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:13:57.517  killing process with pid 133182
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 133182'
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 133182
00:13:57.517   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 133182
00:13:57.776   05:04:11 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:57.776  
00:13:57.776  real	0m1.410s
00:13:57.776  user	0m3.676s
00:13:57.776  sys	0m0.318s
00:13:57.776  ************************************
00:13:57.776  END TEST bdev_bounds
00:13:57.776  ************************************
00:13:57.776   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:57.776   05:04:11 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:57.776   05:04:11 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:13:57.776   05:04:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:13:57.776   05:04:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:57.776   05:04:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:13:57.776  ************************************
00:13:57.776  START TEST bdev_nbd
00:13:57.776  ************************************
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:13:57.776    05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1')
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0')
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1')
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=133238
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 133238 /var/tmp/spdk-nbd.sock
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 133238 ']'
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:57.776  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:57.776   05:04:11 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:57.776  [2024-11-20 05:04:11.655402] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:13:57.776  [2024-11-20 05:04:11.655684] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:58.035  [2024-11-20 05:04:11.807826] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:13:58.035  [2024-11-20 05:04:11.829256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:58.035  [2024-11-20 05:04:11.858688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock Nvme0n1
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1')
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock Nvme0n1
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1')
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:13:58.602   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:13:58.602    05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:13:59.169    05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:59.169  1+0 records in
00:13:59.169  1+0 records out
00:13:59.169  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000400099 s, 10.2 MB/s
00:13:59.169    05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
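The `waitfornbd` helper traced above polls `grep -q -w nbd0 /proc/partitions` up to 20 times, breaks when the device appears, then probes it with a one-block `O_DIRECT` dd read. The generic retry-poll shape of that helper can be sketched as follows (the function name and the counter-backed example predicate are illustrative, not SPDK code):

```python
# Sketch of the waitfornbd retry loop above: poll a predicate up to
# max_tries times with a short sleep between attempts, returning whether
# it ever became true. The shell helper's predicate is the grep against
# /proc/partitions; here it is a generic callable.
import time


def wait_for(predicate, max_tries: int = 20, delay: float = 0.05) -> bool:
    for _ in range(max_tries):
        if predicate():
            return True
        time.sleep(delay)
    return False


# Example: a predicate that becomes true on the third poll, standing in
# for "nbd0 finally shows up in /proc/partitions".
state = {"calls": 0}


def appears_on_third_poll() -> bool:
    state["calls"] += 1
    return state["calls"] >= 3


print(wait_for(appears_on_third_poll))  # True
```

The real helper follows a successful poll with the direct-I/O dd probe (visible at `autotest_common.sh@889` above) to confirm the device actually serves reads, not just that the kernel lists it.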
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:13:59.169   05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 ))
00:13:59.169    05:04:12 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:13:59.169    {
00:13:59.169      "nbd_device": "/dev/nbd0",
00:13:59.169      "bdev_name": "Nvme0n1"
00:13:59.169    }
00:13:59.169  ]'
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:13:59.169    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:13:59.169    {
00:13:59.169      "nbd_device": "/dev/nbd0",
00:13:59.169      "bdev_name": "Nvme0n1"
00:13:59.169    }
00:13:59.169  ]'
00:13:59.169    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:59.169   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:13:59.428    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:59.428   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:59.428   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:59.428   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:59.428   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:59.428   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:59.428   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:13:59.428   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:13:59.428    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:13:59.428    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:59.428     05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:13:59.686    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:13:59.686     05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:13:59.686     05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:13:59.686    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:13:59.686     05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:13:59.686     05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:13:59.945     05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:13:59.945    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:13:59.945    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1')
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0')
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock Nvme0n1 /dev/nbd0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1')
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
00:13:59.945  /dev/nbd0
00:13:59.945    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:13:59.945   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:00.204  1+0 records in
00:14:00.204  1+0 records out
00:14:00.204  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054587 s, 7.5 MB/s
00:14:00.204    05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:00.204   05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:14:00.204    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:14:00.204    05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:00.205     05:04:13 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:14:00.463    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:14:00.463    {
00:14:00.463      "nbd_device": "/dev/nbd0",
00:14:00.463      "bdev_name": "Nvme0n1"
00:14:00.463    }
00:14:00.463  ]'
00:14:00.463     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:14:00.463    {
00:14:00.463      "nbd_device": "/dev/nbd0",
00:14:00.463      "bdev_name": "Nvme0n1"
00:14:00.463    }
00:14:00.463  ]'
00:14:00.463     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:14:00.463    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:14:00.463     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:14:00.463     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:14:00.463    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1
00:14:00.464    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']'
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:14:00.464  256+0 records in
00:14:00.464  256+0 records out
00:14:00.464  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0081151 s, 129 MB/s
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:14:00.464  256+0 records in
00:14:00.464  256+0 records out
00:14:00.464  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0542887 s, 19.3 MB/s
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0')
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
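The `nbd_dd_data_verify` sequence above writes 256 random 4 KiB blocks (1 MiB) to a temp file, dd's them onto `/dev/nbd0` with `oflag=direct`, then compares the file against the device with `cmp -b -n 1M`. A sketch of that write-then-verify pattern, with a plain file standing in for the NBD device (the stand-in and helper name are assumptions for illustration):

```python
# Sketch of the nbd_dd_data_verify pattern above: generate a random 1 MiB
# test file, copy it to the "device" (a plain file here, in place of
# /dev/nbd0), then compare the two byte-for-byte as `cmp` does.
import filecmp
import os
import shutil
import tempfile

SIZE = 256 * 4096  # 256 blocks of 4096 bytes = 1 MiB, as in the dd above


def dd_data_verify(tmp_file: str, device: str) -> bool:
    # dd if=/dev/urandom of=$tmp_file bs=4096 count=256
    with open(tmp_file, "wb") as f:
        f.write(os.urandom(SIZE))
    # dd if=$tmp_file of=$device bs=4096 count=256 oflag=direct
    shutil.copyfile(tmp_file, device)
    # cmp -b -n 1M $tmp_file $device
    return filecmp.cmp(tmp_file, device, shallow=False)


if __name__ == "__main__":
    workdir = tempfile.mkdtemp()
    try:
        print(dd_data_verify(os.path.join(workdir, "nbdrandtest"),
                             os.path.join(workdir, "nbd0")))  # True
    finally:
        shutil.rmtree(workdir)
```

The file copy cannot reproduce what `oflag=direct` actually exercises (bypassing the page cache so the data truly traverses the NBD path); that part only makes sense against the real device node.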
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:00.464   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:14:00.723    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:00.723   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:00.723   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:00.723   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:00.723   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:00.723   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:00.723   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:14:00.723   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:14:00.723    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:14:00.723    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:00.723     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:14:00.982    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:14:00.982     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:14:00.982     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:14:01.240    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:14:01.240     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:14:01.240     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:14:01.240     05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:14:01.240    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:14:01.240    05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:14:01.240   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:14:01.240   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:14:01.240   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
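The `nbd_get_count` checks above pipe the JSON from `rpc.py nbd_get_disks` through `jq -r '.[] | .nbd_device'` and `grep -c /dev/nbd` to count attached devices (1 while the disk is up, 0 after `nbd_stop_disk`). The same count in Python, using sample payloads mirroring the two JSON documents printed in the log (the function name is illustrative):

```python
# Sketch of the nbd_get_count helper above: count NBD devices in the JSON
# array that `rpc.py nbd_get_disks` returns, equivalent to the
# jq + grep -c pipeline in nbd_common.sh.
import json


def nbd_get_count(disks_json: str) -> int:
    return sum(1 for disk in json.loads(disks_json)
               if "/dev/nbd" in disk.get("nbd_device", ""))


# Payloads as printed in the log: one attached disk, then none.
one_disk = '[{"nbd_device": "/dev/nbd0", "bdev_name": "Nvme0n1"}]'
no_disks = "[]"

print(nbd_get_count(one_disk), nbd_get_count(no_disks))  # 1 0
```

Parsing the JSON directly avoids the `grep -c` edge case the shell version works around with `|| true`-style handling (`grep -c` on empty input exits nonzero, hence the `true` fallback visible at `nbd_common.sh@65` above).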
00:14:01.240   05:04:14 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:14:01.240   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:01.240   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:14:01.241   05:04:14 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:14:01.241  malloc_lvol_verify
00:14:01.241   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:14:01.499  2a7c7369-a443-41ed-b88b-3361d46a80c8
00:14:01.499   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:14:01.758  b28d3762-0a7b-4cd8-9bc5-647ac5f10af5
00:14:01.758   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:14:02.016  /dev/nbd0
00:14:02.016   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:14:02.016   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:14:02.016   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:14:02.016   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:14:02.016   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:14:02.016  mke2fs 1.46.5 (30-Dec-2021)
00:14:02.016  
00:14:02.017  Filesystem too small for a journal
00:14:02.017  Discarding device blocks:    0/1024         done                            
00:14:02.017  Creating filesystem with 1024 4k blocks and 1024 inodes
00:14:02.017  
00:14:02.017  Allocating group tables: 0/1   done                            
00:14:02.017  Writing inode tables: 0/1   done                            
00:14:02.017  Writing superblocks and filesystem accounting information: 0/1   done
00:14:02.017  
00:14:02.017   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:14:02.017   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:02.017   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:02.017   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:02.017   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:14:02.017   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:02.017   05:04:15 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:14:02.275    05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 133238
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 133238 ']'
00:14:02.275   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 133238
00:14:02.275    05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:14:02.534   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:02.534    05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 133238
00:14:02.534   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:02.534  killing process with pid 133238
00:14:02.534   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:02.534   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 133238'
00:14:02.534   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 133238
00:14:02.534   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 133238
00:14:02.534   05:04:16 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:14:02.534  
00:14:02.535  real	0m4.888s
00:14:02.535  user	0m7.539s
00:14:02.535  sys	0m1.137s
00:14:02.535  ************************************
00:14:02.535  END TEST bdev_nbd
00:14:02.535  ************************************
00:14:02.535   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:02.535   05:04:16 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:14:02.794   05:04:16 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:14:02.794   05:04:16 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']'
00:14:02.794  skipping fio tests on NVMe due to multi-ns failures.
00:14:02.794   05:04:16 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:14:02.794   05:04:16 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:14:02.794   05:04:16 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:14:02.794   05:04:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:14:02.794   05:04:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:02.794   05:04:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:02.794  ************************************
00:14:02.794  START TEST bdev_verify
00:14:02.794  ************************************
00:14:02.794   05:04:16 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:14:02.794  [2024-11-20 05:04:16.597746] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:02.794  [2024-11-20 05:04:16.598074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133425 ]
00:14:03.053  [2024-11-20 05:04:16.759204] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:03.053  [2024-11-20 05:04:16.779215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:03.053  [2024-11-20 05:04:16.820558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:03.053  [2024-11-20 05:04:16.820566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:03.311  Running I/O for 5 seconds...
00:14:05.186      18112.00 IOPS,    70.75 MiB/s
[2024-11-20T05:04:20.111Z]     18848.00 IOPS,    73.62 MiB/s
[2024-11-20T05:04:21.487Z]     18624.00 IOPS,    72.75 MiB/s
[2024-11-20T05:04:22.423Z]     18704.00 IOPS,    73.06 MiB/s
[2024-11-20T05:04:22.423Z]     18854.40 IOPS,    73.65 MiB/s
00:14:08.466                                                                                                  Latency(us)
00:14:08.466  
[2024-11-20T05:04:22.423Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:08.466  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:08.467  	 Verification LBA range: start 0x0 length 0xa0000
00:14:08.467  	 Nvme0n1             :       5.01    9516.47      37.17       0.00     0.00   13380.20     867.61   19899.11
00:14:08.467  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:08.467  	 Verification LBA range: start 0xa0000 length 0xa0000
00:14:08.467  	 Nvme0n1             :       5.01    9295.10      36.31       0.00     0.00   13703.10     997.93   21448.15
00:14:08.467  
[2024-11-20T05:04:22.424Z]  ===================================================================================================================
00:14:08.467  
[2024-11-20T05:04:22.424Z]  Total                       :              18811.57      73.48       0.00     0.00   13539.79     867.61   21448.15
00:14:08.467  
00:14:08.467  real	0m5.821s
00:14:08.467  user	0m10.929s
00:14:08.467  sys	0m0.229s
00:14:08.467   05:04:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:08.467  ************************************
00:14:08.467  END TEST bdev_verify
00:14:08.467  ************************************
00:14:08.467   05:04:22 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:14:08.467   05:04:22 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:08.467   05:04:22 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:14:08.467   05:04:22 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:08.467   05:04:22 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:08.467  ************************************
00:14:08.467  START TEST bdev_verify_big_io
00:14:08.467  ************************************
00:14:08.467   05:04:22 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:08.726  [2024-11-20 05:04:22.466339] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:08.726  [2024-11-20 05:04:22.466620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133523 ]
00:14:08.726  [2024-11-20 05:04:22.625439] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:08.726  [2024-11-20 05:04:22.646204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:08.984  [2024-11-20 05:04:22.686632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:08.984  [2024-11-20 05:04:22.686638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:08.984  Running I/O for 5 seconds...
00:14:11.297       2360.00 IOPS,   147.50 MiB/s
[2024-11-20T05:04:26.188Z]      2396.00 IOPS,   149.75 MiB/s
[2024-11-20T05:04:27.124Z]      2458.33 IOPS,   153.65 MiB/s
[2024-11-20T05:04:28.060Z]      2468.00 IOPS,   154.25 MiB/s
00:14:14.103                                                                                                  Latency(us)
00:14:14.103  
[2024-11-20T05:04:28.060Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:14.103  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:14.103  	 Verification LBA range: start 0x0 length 0xa000
00:14:14.103  	 Nvme0n1             :       5.05    1115.62      69.73       0.00     0.00  112441.23     255.07  118679.74
00:14:14.103  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:14.103  	 Verification LBA range: start 0xa000 length 0xa000
00:14:14.103  	 Nvme0n1             :       5.04    1358.69      84.92       0.00     0.00   92599.18     372.36  123922.62
00:14:14.103  
[2024-11-20T05:04:28.060Z]  ===================================================================================================================
00:14:14.103  
[2024-11-20T05:04:28.060Z]  Total                       :               2474.31     154.64       0.00     0.00  101553.54     255.07  123922.62
00:14:15.037  
00:14:15.037  real	0m6.220s
00:14:15.037  user	0m11.730s
00:14:15.037  sys	0m0.246s
00:14:15.037   05:04:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:15.037  ************************************
00:14:15.037  END TEST bdev_verify_big_io
00:14:15.037  ************************************
00:14:15.037   05:04:28 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:14:15.037   05:04:28 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:15.037   05:04:28 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:15.037   05:04:28 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:15.037   05:04:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:15.037  ************************************
00:14:15.037  START TEST bdev_write_zeroes
00:14:15.037  ************************************
00:14:15.037   05:04:28 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:15.037  [2024-11-20 05:04:28.740747] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:15.037  [2024-11-20 05:04:28.741048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133622 ]
00:14:15.037  [2024-11-20 05:04:28.891846] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:15.037  [2024-11-20 05:04:28.919597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:15.037  [2024-11-20 05:04:28.960058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:15.294  Running I/O for 1 seconds...
00:14:16.227      72515.00 IOPS,   283.26 MiB/s
00:14:16.227                                                                                                  Latency(us)
00:14:16.227  
[2024-11-20T05:04:30.184Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:16.227  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:16.227  	 Nvme0n1             :       1.00   72398.87     282.81       0.00     0.00    1762.99     584.61   12809.31
00:14:16.227  
[2024-11-20T05:04:30.184Z]  ===================================================================================================================
00:14:16.227  
[2024-11-20T05:04:30.184Z]  Total                       :              72398.87     282.81       0.00     0.00    1762.99     584.61   12809.31
00:14:16.485  
00:14:16.485  real	0m1.706s
00:14:16.485  user	0m1.410s
00:14:16.485  sys	0m0.197s
00:14:16.485   05:04:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:16.485   05:04:30 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:16.485  ************************************
00:14:16.485  END TEST bdev_write_zeroes
00:14:16.485  ************************************
00:14:16.485   05:04:30 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:16.485   05:04:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:16.485   05:04:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:16.485   05:04:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:16.485  ************************************
00:14:16.485  START TEST bdev_json_nonenclosed
00:14:16.485  ************************************
00:14:16.485   05:04:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:16.744  [2024-11-20 05:04:30.497419] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:16.744  [2024-11-20 05:04:30.497701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133660 ]
00:14:16.744  [2024-11-20 05:04:30.646910] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:16.744  [2024-11-20 05:04:30.673475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:17.002  [2024-11-20 05:04:30.719375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:17.002  [2024-11-20 05:04:30.719537] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:14:17.002  [2024-11-20 05:04:30.719577] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:14:17.002  [2024-11-20 05:04:30.719607] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:17.002  
00:14:17.002  real	0m0.359s
00:14:17.002  user	0m0.133s
00:14:17.002  sys	0m0.125s
00:14:17.002   05:04:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:17.002  ************************************
00:14:17.002  END TEST bdev_json_nonenclosed
00:14:17.002  ************************************
00:14:17.002   05:04:30 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:14:17.002   05:04:30 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:17.002   05:04:30 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:17.002   05:04:30 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:17.002   05:04:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:17.002  ************************************
00:14:17.002  START TEST bdev_json_nonarray
00:14:17.002  ************************************
00:14:17.002   05:04:30 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:17.002  [2024-11-20 05:04:30.886916] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:17.002  [2024-11-20 05:04:30.887121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133700 ]
00:14:17.260  [2024-11-20 05:04:31.021162] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:17.260  [2024-11-20 05:04:31.046529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:17.260  [2024-11-20 05:04:31.081311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:17.260  [2024-11-20 05:04:31.081480] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:14:17.260  [2024-11-20 05:04:31.081520] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:14:17.260  [2024-11-20 05:04:31.081558] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:17.260  
00:14:17.260  real	0m0.318s
00:14:17.260  user	0m0.113s
00:14:17.260  sys	0m0.106s
00:14:17.260   05:04:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:17.260  ************************************
00:14:17.260  END TEST bdev_json_nonarray
00:14:17.260  ************************************
00:14:17.260   05:04:31 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:14:17.260   05:04:31 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]]
00:14:17.260   05:04:31 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]]
00:14:17.260   05:04:31 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]]
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:14:17.261   05:04:31 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:14:17.261  ************************************
00:14:17.261  END TEST blockdev_nvme
00:14:17.261  ************************************
00:14:17.261  
00:14:17.261  real	0m23.843s
00:14:17.261  user	0m38.273s
00:14:17.261  sys	0m3.290s
00:14:17.261   05:04:31 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:17.261   05:04:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:14:17.520    05:04:31  -- spdk/autotest.sh@209 -- # uname -s
00:14:17.520   05:04:31  -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]]
00:14:17.520   05:04:31  -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:14:17.520   05:04:31  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:17.520   05:04:31  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:17.520   05:04:31  -- common/autotest_common.sh@10 -- # set +x
00:14:17.520  ************************************
00:14:17.520  START TEST blockdev_nvme_gpt
00:14:17.520  ************************************
00:14:17.520   05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt
00:14:17.520  * Looking for test storage...
00:14:17.520  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:14:17.520    05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:17.520     05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:17.520     05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version
00:14:17.520    05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-:
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-:
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<'
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:17.520     05:04:31 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:17.520    05:04:31 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0
00:14:17.520    05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:17.520    05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:17.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.520  		--rc genhtml_branch_coverage=1
00:14:17.520  		--rc genhtml_function_coverage=1
00:14:17.520  		--rc genhtml_legend=1
00:14:17.520  		--rc geninfo_all_blocks=1
00:14:17.520  		--rc geninfo_unexecuted_blocks=1
00:14:17.520  		
00:14:17.520  		'
00:14:17.520    05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:17.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.520  		--rc genhtml_branch_coverage=1
00:14:17.520  		--rc genhtml_function_coverage=1
00:14:17.520  		--rc genhtml_legend=1
00:14:17.520  		--rc geninfo_all_blocks=1
00:14:17.520  		--rc geninfo_unexecuted_blocks=1
00:14:17.520  		
00:14:17.520  		'
00:14:17.520    05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:17.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.520  		--rc genhtml_branch_coverage=1
00:14:17.520  		--rc genhtml_function_coverage=1
00:14:17.520  		--rc genhtml_legend=1
00:14:17.520  		--rc geninfo_all_blocks=1
00:14:17.520  		--rc geninfo_unexecuted_blocks=1
00:14:17.520  		
00:14:17.520  		'
00:14:17.520    05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:17.520  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:17.520  		--rc genhtml_branch_coverage=1
00:14:17.520  		--rc genhtml_function_coverage=1
00:14:17.520  		--rc genhtml_legend=1
00:14:17.520  		--rc geninfo_all_blocks=1
00:14:17.520  		--rc geninfo_unexecuted_blocks=1
00:14:17.520  		
00:14:17.520  		'
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:14:17.520    05:04:31 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # :
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5
00:14:17.520    05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']'
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device=
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek=
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx=
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc=
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']'
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]]
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]]
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=133777
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 133777
00:14:17.520   05:04:31 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:14:17.520   05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 133777 ']'
00:14:17.520   05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:17.520   05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:17.520  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:17.520   05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:17.520   05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:17.520   05:04:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:17.779  [2024-11-20 05:04:31.494968] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:17.779  [2024-11-20 05:04:31.495202] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid133777 ]
00:14:17.779  [2024-11-20 05:04:31.629708] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:17.779  [2024-11-20 05:04:31.654696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:17.779  [2024-11-20 05:04:31.684144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:18.716   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:18.716   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0
00:14:18.716   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in
00:14:18.716   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf
00:14:18.716   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:18.975  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:14:18.975  Waiting for block devices as requested
00:14:18.975  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme*
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:14:18.975   05:04:32 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]]
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1')
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme=
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}"
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]]
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1
00:14:18.975    05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label
00:14:18.975  BYT;
00:14:18.975  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;'
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label
00:14:18.975  BYT;
00:14:18.975  /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]]
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]]
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df
00:14:18.975   05:04:32 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
00:14:19.543    05:04:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()'
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _
00:14:19.543     05:04:33 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:14:19.543   05:04:33 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c
00:14:19.543    05:04:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]]
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()'
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _
00:14:19.543     05:04:33 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b
00:14:19.543    05:04:33 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b
00:14:19.543   05:04:33 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b
00:14:19.543   05:04:33 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
00:14:20.479  The operation has completed successfully.
00:14:20.479   05:04:34 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1
00:14:21.415  The operation has completed successfully.
00:14:21.415   05:04:35 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:21.674  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:14:21.933  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:14:22.869   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs
00:14:22.869   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:22.869   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:22.869  []
00:14:22.869   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:22.869   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf
00:14:22.869   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json
00:14:22.869   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json
00:14:22.869    05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:23.129   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\'''
00:14:23.129   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.129   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:23.129   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.129   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine
00:14:23.129   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.129   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:23.129   05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.129   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat
00:14:23.129    05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.129    05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.129    05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.129   05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs
00:14:23.129    05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:23.129    05:04:36 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)'
00:14:23.129    05:04:36 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:23.129   05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name
00:14:23.129    05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name
00:14:23.129    05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' '  "name": "Nvme0n1p1",' '  "aliases": [' '    "6f89f330-603b-4116-ac73-2ca8eae53030"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655104,' '  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 256,' '      "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' '      "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' '      "partition_name": "SPDK_TEST_first"' '    }' '  }' '}' '{' '  "name": "Nvme0n1p2",' '  "aliases": [' '    "abf1734f-66e5-4c0f-aa29-4021d4d307df"' '  ],' '  "product_name": "GPT Disk",' '  "block_size": 4096,' '  "num_blocks": 655103,' '  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "gpt": {' '      "base_bdev": "Nvme0n1",' '      "offset_blocks": 655360,' '      "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' '      "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' '      "partition_name": "SPDK_TEST_second"' '    }' '  }' '}'
00:14:23.129   05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}")
00:14:23.129   05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1p1
00:14:23.129   05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT
00:14:23.129   05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 133777
00:14:23.129   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 133777 ']'
00:14:23.129   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 133777
00:14:23.129    05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname
00:14:23.129   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:23.129    05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 133777
00:14:23.388   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:23.388  killing process with pid 133777
00:14:23.388   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:23.388   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 133777'
00:14:23.388   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 133777
00:14:23.388   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 133777
00:14:23.647   05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT
00:14:23.647   05:04:37 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:14:23.647   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:14:23.647   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:23.647   05:04:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:23.647  ************************************
00:14:23.647  START TEST bdev_hello_world
00:14:23.647  ************************************
00:14:23.647   05:04:37 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 ''
00:14:23.647  [2024-11-20 05:04:37.555353] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:23.647  [2024-11-20 05:04:37.555645] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134203 ]
00:14:23.906  [2024-11-20 05:04:37.706260] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:23.906  [2024-11-20 05:04:37.730194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:23.906  [2024-11-20 05:04:37.768520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:24.165  [2024-11-20 05:04:37.980052] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:14:24.165  [2024-11-20 05:04:37.980125] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1
00:14:24.165  [2024-11-20 05:04:37.980174] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:14:24.165  [2024-11-20 05:04:37.982279] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:14:24.165  [2024-11-20 05:04:37.982844] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:14:24.165  [2024-11-20 05:04:37.982909] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:14:24.165  [2024-11-20 05:04:37.983174] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:14:24.165  
00:14:24.165  [2024-11-20 05:04:37.983230] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:14:24.424  
00:14:24.424  real	0m0.699s
00:14:24.424  user	0m0.415s
00:14:24.424  sys	0m0.185s
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:24.424  ************************************
00:14:24.424  END TEST bdev_hello_world
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:14:24.424  ************************************
00:14:24.424   05:04:38 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds ''
00:14:24.424   05:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:14:24.424   05:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:24.424   05:04:38 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:24.424  ************************************
00:14:24.424  START TEST bdev_bounds
00:14:24.424  ************************************
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=134234
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:14:24.424  Process bdevio pid: 134234
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 134234'
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 134234
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 134234 ']'
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:24.424  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:24.424   05:04:38 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:14:24.424  [2024-11-20 05:04:38.301274] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:24.424  [2024-11-20 05:04:38.301516] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134234 ]
00:14:24.683  [2024-11-20 05:04:38.452924] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:24.683  [2024-11-20 05:04:38.472453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:24.683  [2024-11-20 05:04:38.509921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:24.683  [2024-11-20 05:04:38.510054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:24.683  [2024-11-20 05:04:38.510054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:25.619   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:25.619   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:14:25.619   05:04:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:14:25.619  I/O targets:
00:14:25.619    Nvme0n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:14:25.619    Nvme0n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:14:25.619  
00:14:25.619  
00:14:25.619       CUnit - A unit testing framework for C - Version 2.1-3
00:14:25.619       http://cunit.sourceforge.net/
00:14:25.619  
00:14:25.619  
00:14:25.619  Suite: bdevio tests on: Nvme0n1p2
00:14:25.619    Test: blockdev write read block ...passed
00:14:25.619    Test: blockdev write zeroes read block ...passed
00:14:25.619    Test: blockdev write zeroes read no split ...passed
00:14:25.619    Test: blockdev write zeroes read split ...passed
00:14:25.619    Test: blockdev write zeroes read split partial ...passed
00:14:25.619    Test: blockdev reset ...[2024-11-20 05:04:39.402627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:14:25.619  [2024-11-20 05:04:39.406420] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:14:25.619  passed
00:14:25.619    Test: blockdev write read 8 blocks ...passed
00:14:25.619    Test: blockdev write read size > 128k ...passed
00:14:25.619    Test: blockdev write read invalid size ...passed
00:14:25.619    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:25.619    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:25.619    Test: blockdev write read max offset ...passed
00:14:25.619    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:25.619    Test: blockdev writev readv 8 blocks ...passed
00:14:25.619    Test: blockdev writev readv 30 x 1block ...passed
00:14:25.619    Test: blockdev writev readv block ...passed
00:14:25.619    Test: blockdev writev readv size > 128k ...passed
00:14:25.619    Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:25.619    Test: blockdev comparev and writev ...[2024-11-20 05:04:39.413147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x9b20d000 len:0x1000
00:14:25.619  [2024-11-20 05:04:39.413326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:14:25.619  passed
00:14:25.619    Test: blockdev nvme passthru rw ...passed
00:14:25.619    Test: blockdev nvme passthru vendor specific ...passed
00:14:25.619    Test: blockdev nvme admin passthru ...passed
00:14:25.619    Test: blockdev copy ...passed
00:14:25.619  Suite: bdevio tests on: Nvme0n1p1
00:14:25.619    Test: blockdev write read block ...passed
00:14:25.619    Test: blockdev write zeroes read block ...passed
00:14:25.619    Test: blockdev write zeroes read no split ...passed
00:14:25.619    Test: blockdev write zeroes read split ...passed
00:14:25.620    Test: blockdev write zeroes read split partial ...passed
00:14:25.620    Test: blockdev reset ...[2024-11-20 05:04:39.426843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:14:25.620  [2024-11-20 05:04:39.428596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:14:25.620  passed
00:14:25.620    Test: blockdev write read 8 blocks ...passed
00:14:25.620    Test: blockdev write read size > 128k ...passed
00:14:25.620    Test: blockdev write read invalid size ...passed
00:14:25.620    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:14:25.620    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:14:25.620    Test: blockdev write read max offset ...passed
00:14:25.620    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:14:25.620    Test: blockdev writev readv 8 blocks ...passed
00:14:25.620    Test: blockdev writev readv 30 x 1block ...passed
00:14:25.620    Test: blockdev writev readv block ...passed
00:14:25.620    Test: blockdev writev readv size > 128k ...passed
00:14:25.620    Test: blockdev writev readv size > 128k in two iovs ...passed
00:14:25.620    Test: blockdev comparev and writev ...[2024-11-20 05:04:39.435408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x9b209000 len:0x1000
00:14:25.620  [2024-11-20 05:04:39.435579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:14:25.620  passed
00:14:25.620    Test: blockdev nvme passthru rw ...passed
00:14:25.620    Test: blockdev nvme passthru vendor specific ...passed
00:14:25.620    Test: blockdev nvme admin passthru ...passed
00:14:25.620    Test: blockdev copy ...passed
00:14:25.620  
00:14:25.620  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:14:25.620                suites      2      2    n/a      0        0
00:14:25.620                 tests     46     46     46      0        0
00:14:25.620               asserts    284    284    284      0      n/a
00:14:25.620  
00:14:25.620  Elapsed time =    0.117 seconds
00:14:25.620  0
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 134234
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 134234 ']'
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 134234
00:14:25.620    05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:25.620    05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134234
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:25.620  killing process with pid 134234
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134234'
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 134234
00:14:25.620   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 134234
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:14:25.879  
00:14:25.879  real	0m1.413s
00:14:25.879  user	0m3.751s
00:14:25.879  sys	0m0.311s
00:14:25.879  ************************************
00:14:25.879  END TEST bdev_bounds
00:14:25.879  ************************************
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:14:25.879   05:04:39 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:14:25.879   05:04:39 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:14:25.879   05:04:39 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:25.879   05:04:39 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:25.879  ************************************
00:14:25.879  START TEST bdev_nbd
00:14:25.879  ************************************
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2' ''
00:14:25.879    05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2')
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=2
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=2
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=134296
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 134296 /var/tmp/spdk-nbd.sock
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 134296 ']'
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:25.879  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:25.879   05:04:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:14:25.879  [2024-11-20 05:04:39.791272] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:25.879  [2024-11-20 05:04:39.791585] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:26.138  [2024-11-20 05:04:39.946870] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:26.138  [2024-11-20 05:04:39.965929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:26.138  [2024-11-20 05:04:40.002980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2'
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 ))
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:14:27.074    05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0
00:14:27.074    05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:14:27.074   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:27.075  1+0 records in
00:14:27.075  1+0 records out
00:14:27.075  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780562 s, 5.2 MB/s
00:14:27.075    05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:14:27.075   05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:14:27.075    05:04:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1
00:14:27.333    05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:14:27.333   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:27.334  1+0 records in
00:14:27.334  1+0 records out
00:14:27.334  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000703437 s, 5.8 MB/s
00:14:27.334    05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ ))
00:14:27.334   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 2 ))
00:14:27.592    05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:14:27.592   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[
00:14:27.592    {
00:14:27.592      "nbd_device": "/dev/nbd0",
00:14:27.592      "bdev_name": "Nvme0n1p1"
00:14:27.592    },
00:14:27.592    {
00:14:27.592      "nbd_device": "/dev/nbd1",
00:14:27.592      "bdev_name": "Nvme0n1p2"
00:14:27.592    }
00:14:27.592  ]'
00:14:27.592   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device'))
00:14:27.592    05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[
00:14:27.592    {
00:14:27.592      "nbd_device": "/dev/nbd0",
00:14:27.592      "bdev_name": "Nvme0n1p1"
00:14:27.592    },
00:14:27.592    {
00:14:27.592      "nbd_device": "/dev/nbd1",
00:14:27.592      "bdev_name": "Nvme0n1p2"
00:14:27.592    }
00:14:27.592  ]'
00:14:27.592    05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device'
00:14:27.592   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:14:27.592   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:27.592   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:27.593   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:27.593   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:14:27.593   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:27.593   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:14:27.851    05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:27.851   05:04:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:14:28.110    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:28.110   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:28.110   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:28.110   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:28.110   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:28.110   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:28.110   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:14:28.110   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:14:28.110    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:14:28.110    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:28.110     05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:14:28.678    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:14:28.678     05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:14:28.678     05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:14:28.678    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:14:28.678     05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:14:28.678     05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:14:28.678     05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:14:28.678    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:14:28.678    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']'
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2' '/dev/nbd0 /dev/nbd1'
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2')
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:28.678   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
00:14:28.678  /dev/nbd0
00:14:28.937    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:28.937  1+0 records in
00:14:28.937  1+0 records out
00:14:28.937  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067511 s, 6.1 MB/s
00:14:28.937    05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:28.937   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1
00:14:29.196  /dev/nbd1
00:14:29.196    05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:14:29.196   05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:29.196  1+0 records in
00:14:29.196  1+0 records out
00:14:29.196  4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053057 s, 7.7 MB/s
00:14:29.196    05:04:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:29.196   05:04:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096
00:14:29.196   05:04:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:14:29.196   05:04:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:14:29.196   05:04:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0
00:14:29.196   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:29.196   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:29.196    05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:14:29.196    05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:29.196     05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:14:29.455    05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:14:29.455    {
00:14:29.455      "nbd_device": "/dev/nbd0",
00:14:29.455      "bdev_name": "Nvme0n1p1"
00:14:29.455    },
00:14:29.455    {
00:14:29.455      "nbd_device": "/dev/nbd1",
00:14:29.455      "bdev_name": "Nvme0n1p2"
00:14:29.455    }
00:14:29.455  ]'
00:14:29.455     05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[
00:14:29.455    {
00:14:29.455      "nbd_device": "/dev/nbd0",
00:14:29.455      "bdev_name": "Nvme0n1p1"
00:14:29.455    },
00:14:29.455    {
00:14:29.455      "nbd_device": "/dev/nbd1",
00:14:29.455      "bdev_name": "Nvme0n1p2"
00:14:29.455    }
00:14:29.455  ]'
00:14:29.455     05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:14:29.455    05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:14:29.455  /dev/nbd1'
00:14:29.455     05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:14:29.455  /dev/nbd1'
00:14:29.455     05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:14:29.455    05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=2
00:14:29.455    05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 2
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=2
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
00:14:29.455  256+0 records in
00:14:29.455  256+0 records out
00:14:29.455  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00940863 s, 111 MB/s
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:14:29.455   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:14:29.714  256+0 records in
00:14:29.714  256+0 records out
00:14:29.714  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0753361 s, 13.9 MB/s
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:14:29.714  256+0 records in
00:14:29.714  256+0 records out
00:14:29.714  1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0891257 s, 11.8 MB/s
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:29.714   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:14:29.973    05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:29.973   05:04:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:14:30.231    05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:30.231   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:30.231   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:30.231   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:30.231   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:30.231   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:30.231   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:14:30.231   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:14:30.231    05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:14:30.231    05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:30.231     05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:14:30.490    05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:14:30.490     05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:14:30.490     05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]'
00:14:30.490    05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:14:30.490     05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo ''
00:14:30.490     05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:14:30.490     05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true
00:14:30.490    05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0
00:14:30.490    05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0
00:14:30.490   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0
00:14:30.490   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:14:30.490   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0
00:14:30.490   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0
00:14:30.490   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:30.490   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0
00:14:30.490   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512
00:14:30.748  malloc_lvol_verify
00:14:30.748   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
00:14:31.006  7634e5be-6be1-43be-9003-1c7fbb4e7fd5
00:14:31.265   05:04:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs
00:14:31.265  60d0bd4c-00f2-4981-a9eb-947be933abf5
00:14:31.265   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
00:14:31.832  /dev/nbd0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]]
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 ))
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0
00:14:31.832  mke2fs 1.46.5 (30-Dec-2021)
00:14:31.832  
00:14:31.832  Filesystem too small for a journal
00:14:31.832  Discarding device blocks:    0/1024         done                            
00:14:31.832  Creating filesystem with 1024 4k blocks and 1024 inodes
00:14:31.832  
00:14:31.832  Allocating group tables: 0/1   done                            
00:14:31.832  Writing inode tables: 0/1   done                            
00:14:31.832  Writing superblocks and filesystem accounting information: 0/1   done
00:14:31.832  
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:14:31.832    05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 134296
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 134296 ']'
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 134296
00:14:31.832    05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:31.832    05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134296
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:31.832  killing process with pid 134296
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134296'
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 134296
00:14:31.832   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 134296
00:14:32.091   05:04:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT
00:14:32.091  
00:14:32.091  real	0m6.254s
00:14:32.091  user	0m9.536s
00:14:32.091  sys	0m1.654s
00:14:32.091   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:32.091   05:04:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:14:32.091  ************************************
00:14:32.091  END TEST bdev_nbd
00:14:32.091  ************************************
00:14:32.091   05:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]]
00:14:32.091   05:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']'
00:14:32.091   05:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']'
00:14:32.091  skipping fio tests on NVMe due to multi-ns failures.
00:14:32.091   05:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:14:32.091   05:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT
00:14:32.091   05:04:46 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:14:32.091   05:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:14:32.091   05:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:32.091   05:04:46 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:32.091  ************************************
00:14:32.091  START TEST bdev_verify
00:14:32.091  ************************************
00:14:32.091   05:04:46 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:14:32.350  [2024-11-20 05:04:46.084475] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:32.350  [2024-11-20 05:04:46.084681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134543 ]
00:14:32.350  [2024-11-20 05:04:46.229188] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:32.350  [2024-11-20 05:04:46.249053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:32.350  [2024-11-20 05:04:46.286365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:32.350  [2024-11-20 05:04:46.286374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:32.609  Running I/O for 5 seconds...
00:14:34.922      16000.00 IOPS,    62.50 MiB/s
[2024-11-20T05:04:49.829Z]     16480.00 IOPS,    64.38 MiB/s
[2024-11-20T05:04:50.837Z]     16341.33 IOPS,    63.83 MiB/s
[2024-11-20T05:04:51.772Z]     16336.00 IOPS,    63.81 MiB/s
[2024-11-20T05:04:51.772Z]     16448.00 IOPS,    64.25 MiB/s
00:14:37.815                                                                                                  Latency(us)
00:14:37.815  
[2024-11-20T05:04:51.772Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:37.815  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:37.815  	 Verification LBA range: start 0x0 length 0x4ff80
00:14:37.815  	 Nvme0n1p1           :       5.03    4095.32      16.00       0.00     0.00   31175.31    4915.20   25380.31
00:14:37.815  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:37.815  	 Verification LBA range: start 0x4ff80 length 0x4ff80
00:14:37.815  	 Nvme0n1p1           :       5.03    4098.37      16.01       0.00     0.00   31061.66    2055.45   25737.77
00:14:37.815  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:14:37.815  	 Verification LBA range: start 0x0 length 0x4ff7f
00:14:37.815  	 Nvme0n1p2           :       5.03    4094.33      15.99       0.00     0.00   31118.75    2398.02   26691.03
00:14:37.815  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:14:37.815  	 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:14:37.816  	 Nvme0n1p2           :       5.01    4086.15      15.96       0.00     0.00   31219.46    7030.23   34317.03
00:14:37.816  
[2024-11-20T05:04:51.773Z]  ===================================================================================================================
00:14:37.816  
[2024-11-20T05:04:51.773Z]  Total                       :              16374.17      63.96       0.00     0.00   31143.68    2055.45   34317.03
00:14:38.075  
00:14:38.075  real	0m5.746s
00:14:38.075  user	0m10.867s
00:14:38.075  sys	0m0.176s
00:14:38.075   05:04:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:38.075   05:04:51 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:14:38.075  ************************************
00:14:38.075  END TEST bdev_verify
00:14:38.075  ************************************
00:14:38.075   05:04:51 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:38.075   05:04:51 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:14:38.075   05:04:51 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:38.075   05:04:51 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:38.075  ************************************
00:14:38.075  START TEST bdev_verify_big_io
00:14:38.075  ************************************
00:14:38.075   05:04:51 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:14:38.075  [2024-11-20 05:04:51.882084] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:38.075  [2024-11-20 05:04:51.882305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134638 ]
00:14:38.075  [2024-11-20 05:04:52.026146] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:38.333  [2024-11-20 05:04:52.046994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:14:38.333  [2024-11-20 05:04:52.083781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:38.333  [2024-11-20 05:04:52.083787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:38.592  Running I/O for 5 seconds...
00:14:40.908       1996.00 IOPS,   124.75 MiB/s
[2024-11-20T05:04:55.802Z]      2304.00 IOPS,   144.00 MiB/s
[2024-11-20T05:04:56.738Z]      2340.00 IOPS,   146.25 MiB/s
[2024-11-20T05:04:57.675Z]      2376.00 IOPS,   148.50 MiB/s
[2024-11-20T05:04:57.675Z]      2460.80 IOPS,   153.80 MiB/s
00:14:43.718                                                                                                  Latency(us)
00:14:43.718  
[2024-11-20T05:04:57.675Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:43.718  Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:43.718  	 Verification LBA range: start 0x0 length 0x4ff8
00:14:43.718  	 Nvme0n1p1           :       5.15     571.22      35.70       0.00     0.00  220461.99    6017.40  237359.48
00:14:43.718  Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:43.718  	 Verification LBA range: start 0x4ff8 length 0x4ff8
00:14:43.718  	 Nvme0n1p1           :       5.10     652.29      40.77       0.00     0.00  193132.04    5272.67  199229.44
00:14:43.718  Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:14:43.718  	 Verification LBA range: start 0x0 length 0x4ff7
00:14:43.718  	 Nvme0n1p2           :       5.16     563.27      35.20       0.00     0.00  218056.00    3112.96  209715.20
00:14:43.718  Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:14:43.718  	 Verification LBA range: start 0x4ff7 length 0x4ff7
00:14:43.718  	 Nvme0n1p2           :       5.13     660.02      41.25       0.00     0.00  187518.28    1027.72  200182.69
00:14:43.718  
[2024-11-20T05:04:57.675Z]  ===================================================================================================================
00:14:43.718  
[2024-11-20T05:04:57.675Z]  Total                       :               2446.80     152.92       0.00     0.00  203788.22    1027.72  237359.48
00:14:44.285  
00:14:44.285  real	0m6.319s
00:14:44.285  user	0m11.970s
00:14:44.285  sys	0m0.216s
00:14:44.285   05:04:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:44.286   05:04:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:14:44.286  ************************************
00:14:44.286  END TEST bdev_verify_big_io
00:14:44.286  ************************************
00:14:44.286   05:04:58 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:44.286   05:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:44.286   05:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:44.286   05:04:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:44.286  ************************************
00:14:44.286  START TEST bdev_write_zeroes
00:14:44.286  ************************************
00:14:44.286   05:04:58 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:44.544  [2024-11-20 05:04:58.272564] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:44.544  [2024-11-20 05:04:58.272843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134738 ]
00:14:44.544  [2024-11-20 05:04:58.422725] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:44.544  [2024-11-20 05:04:58.442269] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:44.544  [2024-11-20 05:04:58.482348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:44.802  Running I/O for 1 seconds...
00:14:46.176      55426.00 IOPS,   216.51 MiB/s
00:14:46.176                                                                                                  Latency(us)
00:14:46.176  
[2024-11-20T05:05:00.133Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:14:46.176  Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:46.176  	 Nvme0n1p1           :       1.01   27704.12     108.22       0.00     0.00    4610.60    2219.29   13285.93
00:14:46.176  Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:14:46.176  	 Nvme0n1p2           :       1.01   27669.47     108.08       0.00     0.00    4609.97    2591.65    9353.77
00:14:46.176  
[2024-11-20T05:05:00.133Z]  ===================================================================================================================
00:14:46.176  
[2024-11-20T05:05:00.133Z]  Total                       :              55373.59     216.30       0.00     0.00    4610.29    2219.29   13285.93
00:14:46.176  
00:14:46.176  real	0m1.703s
00:14:46.176  user	0m1.400s
00:14:46.176  sys	0m0.204s
00:14:46.176   05:04:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:46.176   05:04:59 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:46.176  ************************************
00:14:46.176  END TEST bdev_write_zeroes
00:14:46.176  ************************************
00:14:46.176   05:04:59 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:46.176   05:04:59 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:46.176   05:04:59 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:46.176   05:04:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:46.176  ************************************
00:14:46.176  START TEST bdev_json_nonenclosed
00:14:46.176  ************************************
00:14:46.176   05:04:59 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:46.176  [2024-11-20 05:05:00.038217] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:46.176  [2024-11-20 05:05:00.038542] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134776 ]
00:14:46.435  [2024-11-20 05:05:00.191707] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:46.435  [2024-11-20 05:05:00.217102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:46.435  [2024-11-20 05:05:00.251846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:46.435  [2024-11-20 05:05:00.251963] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:14:46.435  [2024-11-20 05:05:00.251997] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:14:46.435  [2024-11-20 05:05:00.252029] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:46.435  
00:14:46.435  real	0m0.353s
00:14:46.435  user	0m0.141s
00:14:46.435  sys	0m0.112s
00:14:46.435   05:05:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:46.435   05:05:00 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:14:46.435  ************************************
00:14:46.435  END TEST bdev_json_nonenclosed
00:14:46.435  ************************************
00:14:46.435   05:05:00 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:46.435   05:05:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:14:46.435   05:05:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:46.435   05:05:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:46.435  ************************************
00:14:46.435  START TEST bdev_json_nonarray
00:14:46.435  ************************************
00:14:46.435   05:05:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:14:46.694  [2024-11-20 05:05:00.441849] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:46.694  [2024-11-20 05:05:00.442165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134814 ]
00:14:46.694  [2024-11-20 05:05:00.594945] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:46.694  [2024-11-20 05:05:00.618300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:46.952  [2024-11-20 05:05:00.658032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:46.952  [2024-11-20 05:05:00.658191] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:14:46.952  [2024-11-20 05:05:00.658239] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:14:46.952  [2024-11-20 05:05:00.658277] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:46.952  
00:14:46.952  real	0m0.362s
00:14:46.952  user	0m0.138s
00:14:46.953  sys	0m0.125s
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:14:46.953  ************************************
00:14:46.953  END TEST bdev_json_nonarray
00:14:46.953  ************************************
00:14:46.953   05:05:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]]
00:14:46.953   05:05:00 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]]
00:14:46.953   05:05:00 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid
00:14:46.953   05:05:00 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:46.953   05:05:00 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:46.953   05:05:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:46.953  ************************************
00:14:46.953  START TEST bdev_gpt_uuid
00:14:46.953  ************************************
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=134836
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 134836
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 134836 ']'
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:14:46.953  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:46.953   05:05:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:46.953  [2024-11-20 05:05:00.875939] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:46.953  [2024-11-20 05:05:00.876434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134836 ]
00:14:47.211  [2024-11-20 05:05:01.026446] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:47.211  [2024-11-20 05:05:01.048495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:47.211  [2024-11-20 05:05:01.080523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:48.148  Some configs were skipped because the RPC state that can call them passed over.
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.148    05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030
00:14:48.148    05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.148    05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:48.148    05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[
00:14:48.148  {
00:14:48.148  "name": "Nvme0n1p1",
00:14:48.148  "aliases": [
00:14:48.148  "6f89f330-603b-4116-ac73-2ca8eae53030"
00:14:48.148  ],
00:14:48.148  "product_name": "GPT Disk",
00:14:48.148  "block_size": 4096,
00:14:48.148  "num_blocks": 655104,
00:14:48.148  "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:14:48.148  "assigned_rate_limits": {
00:14:48.148  "rw_ios_per_sec": 0,
00:14:48.148  "rw_mbytes_per_sec": 0,
00:14:48.148  "r_mbytes_per_sec": 0,
00:14:48.148  "w_mbytes_per_sec": 0
00:14:48.148  },
00:14:48.148  "claimed": false,
00:14:48.148  "zoned": false,
00:14:48.148  "supported_io_types": {
00:14:48.148  "read": true,
00:14:48.148  "write": true,
00:14:48.148  "unmap": true,
00:14:48.148  "flush": true,
00:14:48.148  "reset": true,
00:14:48.148  "nvme_admin": false,
00:14:48.148  "nvme_io": false,
00:14:48.148  "nvme_io_md": false,
00:14:48.148  "write_zeroes": true,
00:14:48.148  "zcopy": false,
00:14:48.148  "get_zone_info": false,
00:14:48.148  "zone_management": false,
00:14:48.148  "zone_append": false,
00:14:48.148  "compare": true,
00:14:48.148  "compare_and_write": false,
00:14:48.148  "abort": true,
00:14:48.148  "seek_hole": false,
00:14:48.148  "seek_data": false,
00:14:48.148  "copy": true,
00:14:48.148  "nvme_iov_md": false
00:14:48.148  },
00:14:48.148  "driver_specific": {
00:14:48.148  "gpt": {
00:14:48.148  "base_bdev": "Nvme0n1",
00:14:48.148  "offset_blocks": 256,
00:14:48.148  "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",
00:14:48.148  "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",
00:14:48.148  "partition_name": "SPDK_TEST_first"
00:14:48.148  }
00:14:48.148  }
00:14:48.148  }
00:14:48.148  ]'
00:14:48.148    05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]]
00:14:48.148    05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]'
00:14:48.148   05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:14:48.148    05:05:01 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:14:48.148   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]]
00:14:48.148    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df
00:14:48.148    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable
00:14:48.148    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:48.148    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:14:48.148   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[
00:14:48.148  {
00:14:48.148  "name": "Nvme0n1p2",
00:14:48.148  "aliases": [
00:14:48.148  "abf1734f-66e5-4c0f-aa29-4021d4d307df"
00:14:48.148  ],
00:14:48.148  "product_name": "GPT Disk",
00:14:48.148  "block_size": 4096,
00:14:48.148  "num_blocks": 655103,
00:14:48.148  "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:14:48.148  "assigned_rate_limits": {
00:14:48.148  "rw_ios_per_sec": 0,
00:14:48.148  "rw_mbytes_per_sec": 0,
00:14:48.148  "r_mbytes_per_sec": 0,
00:14:48.148  "w_mbytes_per_sec": 0
00:14:48.148  },
00:14:48.148  "claimed": false,
00:14:48.148  "zoned": false,
00:14:48.148  "supported_io_types": {
00:14:48.148  "read": true,
00:14:48.148  "write": true,
00:14:48.148  "unmap": true,
00:14:48.148  "flush": true,
00:14:48.148  "reset": true,
00:14:48.148  "nvme_admin": false,
00:14:48.148  "nvme_io": false,
00:14:48.148  "nvme_io_md": false,
00:14:48.148  "write_zeroes": true,
00:14:48.148  "zcopy": false,
00:14:48.148  "get_zone_info": false,
00:14:48.148  "zone_management": false,
00:14:48.148  "zone_append": false,
00:14:48.148  "compare": true,
00:14:48.149  "compare_and_write": false,
00:14:48.149  "abort": true,
00:14:48.149  "seek_hole": false,
00:14:48.149  "seek_data": false,
00:14:48.149  "copy": true,
00:14:48.149  "nvme_iov_md": false
00:14:48.149  },
00:14:48.149  "driver_specific": {
00:14:48.149  "gpt": {
00:14:48.149  "base_bdev": "Nvme0n1",
00:14:48.149  "offset_blocks": 655360,
00:14:48.149  "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",
00:14:48.149  "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",
00:14:48.149  "partition_name": "SPDK_TEST_second"
00:14:48.149  }
00:14:48.149  }
00:14:48.149  }
00:14:48.149  ]'
00:14:48.149    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length
00:14:48.149   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]]
00:14:48.149    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]'
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:14:48.408    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid'
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]]
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 134836
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 134836 ']'
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 134836
00:14:48.408    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:14:48.408    05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134836
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:14:48.408  killing process with pid 134836
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134836'
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 134836
00:14:48.408   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 134836
00:14:48.975  
00:14:48.975  real	0m1.980s
00:14:48.975  user	0m2.208s
00:14:48.975  sys	0m0.425s
00:14:48.975   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:48.975   05:05:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x
00:14:48.975  ************************************
00:14:48.975  END TEST bdev_gpt_uuid
00:14:48.975  ************************************
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]]
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]]
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]]
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]]
00:14:48.975   05:05:02 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:14:49.234  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:14:49.234  Waiting for block devices as requested
00:14:49.493  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:14:49.493   05:05:03 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]]
00:14:49.493   05:05:03 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1
00:14:49.493  /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54
00:14:49.493  /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54
00:14:49.493  /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:14:49.493  /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:14:49.493   05:05:03 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]]
00:14:49.493  
00:14:49.493  real	0m32.067s
00:14:49.493  user	0m47.710s
00:14:49.493  sys	0m5.961s
00:14:49.493   05:05:03 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:49.493   05:05:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:14:49.493  ************************************
00:14:49.493  END TEST blockdev_nvme_gpt
00:14:49.493  ************************************
00:14:49.493   05:05:03  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:14:49.493   05:05:03  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:49.493   05:05:03  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:49.493   05:05:03  -- common/autotest_common.sh@10 -- # set +x
00:14:49.493  ************************************
00:14:49.493  START TEST nvme
00:14:49.493  ************************************
00:14:49.493   05:05:03 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:14:49.752  * Looking for test storage...
00:14:49.752  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:14:49.752    05:05:03 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:14:49.752     05:05:03 nvme -- common/autotest_common.sh@1693 -- # lcov --version
00:14:49.752     05:05:03 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:14:49.752    05:05:03 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:14:49.752    05:05:03 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:14:49.752    05:05:03 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:14:49.752    05:05:03 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:14:49.752    05:05:03 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:14:49.752    05:05:03 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:14:49.752    05:05:03 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:14:49.752    05:05:03 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:14:49.752    05:05:03 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:14:49.752    05:05:03 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:14:49.752    05:05:03 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:14:49.752    05:05:03 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:14:49.752    05:05:03 nvme -- scripts/common.sh@344 -- # case "$op" in
00:14:49.752    05:05:03 nvme -- scripts/common.sh@345 -- # : 1
00:14:49.752    05:05:03 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:14:49.752    05:05:03 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:14:49.752     05:05:03 nvme -- scripts/common.sh@365 -- # decimal 1
00:14:49.752     05:05:03 nvme -- scripts/common.sh@353 -- # local d=1
00:14:49.752     05:05:03 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:14:49.752     05:05:03 nvme -- scripts/common.sh@355 -- # echo 1
00:14:49.752    05:05:03 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:14:49.752     05:05:03 nvme -- scripts/common.sh@366 -- # decimal 2
00:14:49.752     05:05:03 nvme -- scripts/common.sh@353 -- # local d=2
00:14:49.752     05:05:03 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:14:49.752     05:05:03 nvme -- scripts/common.sh@355 -- # echo 2
00:14:49.752    05:05:03 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:14:49.752    05:05:03 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:14:49.752    05:05:03 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:14:49.752    05:05:03 nvme -- scripts/common.sh@368 -- # return 0
00:14:49.752    05:05:03 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:14:49.752    05:05:03 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:14:49.752  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.752  		--rc genhtml_branch_coverage=1
00:14:49.752  		--rc genhtml_function_coverage=1
00:14:49.752  		--rc genhtml_legend=1
00:14:49.752  		--rc geninfo_all_blocks=1
00:14:49.752  		--rc geninfo_unexecuted_blocks=1
00:14:49.752  		
00:14:49.752  		'
00:14:49.752    05:05:03 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:14:49.752  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.752  		--rc genhtml_branch_coverage=1
00:14:49.752  		--rc genhtml_function_coverage=1
00:14:49.752  		--rc genhtml_legend=1
00:14:49.752  		--rc geninfo_all_blocks=1
00:14:49.752  		--rc geninfo_unexecuted_blocks=1
00:14:49.752  		
00:14:49.752  		'
00:14:49.752    05:05:03 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:14:49.752  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.752  		--rc genhtml_branch_coverage=1
00:14:49.752  		--rc genhtml_function_coverage=1
00:14:49.752  		--rc genhtml_legend=1
00:14:49.752  		--rc geninfo_all_blocks=1
00:14:49.752  		--rc geninfo_unexecuted_blocks=1
00:14:49.752  		
00:14:49.752  		'
00:14:49.752    05:05:03 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:14:49.752  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:14:49.752  		--rc genhtml_branch_coverage=1
00:14:49.752  		--rc genhtml_function_coverage=1
00:14:49.752  		--rc genhtml_legend=1
00:14:49.752  		--rc geninfo_all_blocks=1
00:14:49.752  		--rc geninfo_unexecuted_blocks=1
00:14:49.752  		
00:14:49.752  		'
00:14:49.752   05:05:03 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:14:50.011  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:14:50.270  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:14:51.206    05:05:05 nvme -- nvme/nvme.sh@79 -- # uname
00:14:51.206   05:05:05 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']'
00:14:51.206   05:05:05 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT
00:14:51.206   05:05:05 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE'
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE'
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1072 -- # _randomize_va_space=2
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1073 -- # echo 0
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1075 -- # stubpid=135246
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE
00:14:51.206  Waiting for stub to ready for secondary processes...
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes...
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/135246 ]]
00:14:51.206   05:05:05 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:14:51.465  [2024-11-20 05:05:05.175528] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:14:51.465  [2024-11-20 05:05:05.175832] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ]
00:14:52.399   05:05:06 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:14:52.399   05:05:06 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/135246 ]]
00:14:52.399   05:05:06 nvme -- common/autotest_common.sh@1080 -- # sleep 1s
00:14:52.657  [2024-11-20 05:05:06.389261] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:14:52.657  [2024-11-20 05:05:06.421350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:14:52.657  [2024-11-20 05:05:06.473031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:14:52.657  [2024-11-20 05:05:06.473171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:14:52.657  [2024-11-20 05:05:06.473189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:14:52.657  [2024-11-20 05:05:06.482805] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands
00:14:52.657  [2024-11-20 05:05:06.482909] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller
00:14:52.657  [2024-11-20 05:05:06.493959] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created
00:14:52.657  [2024-11-20 05:05:06.494407] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created
00:14:53.223   05:05:07 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']'
00:14:53.223  done.
00:14:53.223   05:05:07 nvme -- common/autotest_common.sh@1082 -- # echo done.
00:14:53.223   05:05:07 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:14:53.223   05:05:07 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:14:53.223   05:05:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:53.223   05:05:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:53.223  ************************************
00:14:53.223  START TEST nvme_reset
00:14:53.223  ************************************
00:14:53.223   05:05:07 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:14:53.790  Initializing NVMe Controllers
00:14:53.790  Skipping QEMU NVMe SSD at 0000:00:10.0
00:14:53.790  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:14:53.790  
00:14:53.790  real	0m0.350s
00:14:53.790  user	0m0.108s
00:14:53.790  sys	0m0.146s
00:14:53.790   05:05:07 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:53.790   05:05:07 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:14:53.790  ************************************
00:14:53.790  END TEST nvme_reset
00:14:53.790  ************************************
00:14:53.790   05:05:07 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:14:53.790   05:05:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:53.790   05:05:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:53.790   05:05:07 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:53.790  ************************************
00:14:53.790  START TEST nvme_identify
00:14:53.790  ************************************
00:14:53.790   05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:14:53.790   05:05:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:14:53.790   05:05:07 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:14:53.790   05:05:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:14:53.790    05:05:07 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:14:53.790    05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:14:53.790    05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:14:53.790    05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:14:53.790     05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:14:53.790     05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:14:53.790    05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:14:53.790    05:05:07 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:14:53.790   05:05:07 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:14:54.050  [2024-11-20 05:05:07.868274] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 135284 terminated unexpected
00:14:54.050  =====================================================
00:14:54.050  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:54.050  =====================================================
00:14:54.050  Controller Capabilities/Features
00:14:54.050  ================================
00:14:54.050  Vendor ID:                             1b36
00:14:54.050  Subsystem Vendor ID:                   1af4
00:14:54.050  Serial Number:                         12340
00:14:54.050  Model Number:                          QEMU NVMe Ctrl
00:14:54.050  Firmware Version:                      8.0.0
00:14:54.050  Recommended Arb Burst:                 6
00:14:54.050  IEEE OUI Identifier:                   00 54 52
00:14:54.050  Multi-path I/O
00:14:54.050    May have multiple subsystem ports:   No
00:14:54.050    May have multiple controllers:       No
00:14:54.050    Associated with SR-IOV VF:           No
00:14:54.050  Max Data Transfer Size:                524288
00:14:54.050  Max Number of Namespaces:              256
00:14:54.050  Max Number of I/O Queues:              64
00:14:54.050  NVMe Specification Version (VS):       1.4
00:14:54.050  NVMe Specification Version (Identify): 1.4
00:14:54.050  Maximum Queue Entries:                 2048
00:14:54.050  Contiguous Queues Required:            Yes
00:14:54.050  Arbitration Mechanisms Supported
00:14:54.050    Weighted Round Robin:                Not Supported
00:14:54.050    Vendor Specific:                     Not Supported
00:14:54.050  Reset Timeout:                         7500 ms
00:14:54.050  Doorbell Stride:                       4 bytes
00:14:54.050  NVM Subsystem Reset:                   Not Supported
00:14:54.050  Command Sets Supported
00:14:54.050    NVM Command Set:                     Supported
00:14:54.050  Boot Partition:                        Not Supported
00:14:54.050  Memory Page Size Minimum:              4096 bytes
00:14:54.050  Memory Page Size Maximum:              65536 bytes
00:14:54.050  Persistent Memory Region:              Not Supported
00:14:54.050  Optional Asynchronous Events Supported
00:14:54.050    Namespace Attribute Notices:         Supported
00:14:54.050    Firmware Activation Notices:         Not Supported
00:14:54.050    ANA Change Notices:                  Not Supported
00:14:54.050    PLE Aggregate Log Change Notices:    Not Supported
00:14:54.050    LBA Status Info Alert Notices:       Not Supported
00:14:54.050    EGE Aggregate Log Change Notices:    Not Supported
00:14:54.050    Normal NVM Subsystem Shutdown event: Not Supported
00:14:54.050    Zone Descriptor Change Notices:      Not Supported
00:14:54.050    Discovery Log Change Notices:        Not Supported
00:14:54.050  Controller Attributes
00:14:54.050    128-bit Host Identifier:             Not Supported
00:14:54.050    Non-Operational Permissive Mode:     Not Supported
00:14:54.050    NVM Sets:                            Not Supported
00:14:54.050    Read Recovery Levels:                Not Supported
00:14:54.050    Endurance Groups:                    Not Supported
00:14:54.050    Predictable Latency Mode:            Not Supported
00:14:54.050    Traffic Based Keep ALive:            Not Supported
00:14:54.050    Namespace Granularity:               Not Supported
00:14:54.050    SQ Associations:                     Not Supported
00:14:54.050    UUID List:                           Not Supported
00:14:54.050    Multi-Domain Subsystem:              Not Supported
00:14:54.050    Fixed Capacity Management:           Not Supported
00:14:54.050    Variable Capacity Management:        Not Supported
00:14:54.050    Delete Endurance Group:              Not Supported
00:14:54.050    Delete NVM Set:                      Not Supported
00:14:54.050    Extended LBA Formats Supported:      Supported
00:14:54.050    Flexible Data Placement Supported:   Not Supported
00:14:54.050  
00:14:54.050  Controller Memory Buffer Support
00:14:54.050  ================================
00:14:54.050  Supported:                             No
00:14:54.050  
00:14:54.050  Persistent Memory Region Support
00:14:54.050  ================================
00:14:54.050  Supported:                             No
00:14:54.050  
00:14:54.050  Admin Command Set Attributes
00:14:54.050  ============================
00:14:54.050  Security Send/Receive:                 Not Supported
00:14:54.050  Format NVM:                            Supported
00:14:54.050  Firmware Activate/Download:            Not Supported
00:14:54.050  Namespace Management:                  Supported
00:14:54.050  Device Self-Test:                      Not Supported
00:14:54.050  Directives:                            Supported
00:14:54.050  NVMe-MI:                               Not Supported
00:14:54.050  Virtualization Management:             Not Supported
00:14:54.050  Doorbell Buffer Config:                Supported
00:14:54.050  Get LBA Status Capability:             Not Supported
00:14:54.050  Command & Feature Lockdown Capability: Not Supported
00:14:54.050  Abort Command Limit:                   4
00:14:54.050  Async Event Request Limit:             4
00:14:54.050  Number of Firmware Slots:              N/A
00:14:54.050  Firmware Slot 1 Read-Only:             N/A
00:14:54.050  Firmware Activation Without Reset:     N/A
00:14:54.050  Multiple Update Detection Support:     N/A
00:14:54.050  Firmware Update Granularity:           No Information Provided
00:14:54.050  Per-Namespace SMART Log:               Yes
00:14:54.050  Asymmetric Namespace Access Log Page:  Not Supported
00:14:54.050  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:14:54.050  Command Effects Log Page:              Supported
00:14:54.050  Get Log Page Extended Data:            Supported
00:14:54.050  Telemetry Log Pages:                   Not Supported
00:14:54.050  Persistent Event Log Pages:            Not Supported
00:14:54.050  Supported Log Pages Log Page:          May Support
00:14:54.050  Commands Supported & Effects Log Page: Not Supported
00:14:54.050  Feature Identifiers & Effects Log Page:May Support
00:14:54.050  NVMe-MI Commands & Effects Log Page:   May Support
00:14:54.050  Data Area 4 for Telemetry Log:         Not Supported
00:14:54.050  Error Log Page Entries Supported:      1
00:14:54.050  Keep Alive:                            Not Supported
00:14:54.050  
00:14:54.050  NVM Command Set Attributes
00:14:54.050  ==========================
00:14:54.050  Submission Queue Entry Size
00:14:54.050    Max:                       64
00:14:54.050    Min:                       64
00:14:54.050  Completion Queue Entry Size
00:14:54.050    Max:                       16
00:14:54.050    Min:                       16
00:14:54.050  Number of Namespaces:        256
00:14:54.050  Compare Command:             Supported
00:14:54.050  Write Uncorrectable Command: Not Supported
00:14:54.050  Dataset Management Command:  Supported
00:14:54.050  Write Zeroes Command:        Supported
00:14:54.050  Set Features Save Field:     Supported
00:14:54.050  Reservations:                Not Supported
00:14:54.050  Timestamp:                   Supported
00:14:54.050  Copy:                        Supported
00:14:54.050  Volatile Write Cache:        Present
00:14:54.050  Atomic Write Unit (Normal):  1
00:14:54.050  Atomic Write Unit (PFail):   1
00:14:54.050  Atomic Compare & Write Unit: 1
00:14:54.050  Fused Compare & Write:       Not Supported
00:14:54.050  Scatter-Gather List
00:14:54.050    SGL Command Set:           Supported
00:14:54.050    SGL Keyed:                 Not Supported
00:14:54.050    SGL Bit Bucket Descriptor: Not Supported
00:14:54.050    SGL Metadata Pointer:      Not Supported
00:14:54.050    Oversized SGL:             Not Supported
00:14:54.050    SGL Metadata Address:      Not Supported
00:14:54.050    SGL Offset:                Not Supported
00:14:54.050    Transport SGL Data Block:  Not Supported
00:14:54.050  Replay Protected Memory Block:  Not Supported
00:14:54.050  
00:14:54.050  Firmware Slot Information
00:14:54.050  =========================
00:14:54.050  Active slot:                 1
00:14:54.050  Slot 1 Firmware Revision:    1.0
00:14:54.050  
00:14:54.050  
00:14:54.050  Commands Supported and Effects
00:14:54.050  ==============================
00:14:54.050  Admin Commands
00:14:54.050  --------------
00:14:54.050     Delete I/O Submission Queue (00h): Supported 
00:14:54.050     Create I/O Submission Queue (01h): Supported 
00:14:54.050                    Get Log Page (02h): Supported 
00:14:54.050     Delete I/O Completion Queue (04h): Supported 
00:14:54.050     Create I/O Completion Queue (05h): Supported 
00:14:54.050                        Identify (06h): Supported 
00:14:54.050                           Abort (08h): Supported 
00:14:54.050                    Set Features (09h): Supported 
00:14:54.050                    Get Features (0Ah): Supported 
00:14:54.050      Asynchronous Event Request (0Ch): Supported 
00:14:54.050            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:14:54.050                  Directive Send (19h): Supported 
00:14:54.050               Directive Receive (1Ah): Supported 
00:14:54.050       Virtualization Management (1Ch): Supported 
00:14:54.050          Doorbell Buffer Config (7Ch): Supported 
00:14:54.050                      Format NVM (80h): Supported LBA-Change 
00:14:54.050  I/O Commands
00:14:54.050  ------------
00:14:54.051                           Flush (00h): Supported LBA-Change 
00:14:54.051                           Write (01h): Supported LBA-Change 
00:14:54.051                            Read (02h): Supported 
00:14:54.051                         Compare (05h): Supported 
00:14:54.051                    Write Zeroes (08h): Supported LBA-Change 
00:14:54.051              Dataset Management (09h): Supported LBA-Change 
00:14:54.051                         Unknown (0Ch): Supported 
00:14:54.051                         Unknown (12h): Supported 
00:14:54.051                            Copy (19h): Supported LBA-Change 
00:14:54.051                         Unknown (1Dh): Supported LBA-Change 
00:14:54.051  
00:14:54.051  Error Log
00:14:54.051  =========
00:14:54.051  
00:14:54.051  Arbitration
00:14:54.051  ===========
00:14:54.051  Arbitration Burst:           no limit
00:14:54.051  
00:14:54.051  Power Management
00:14:54.051  ================
00:14:54.051  Number of Power States:          1
00:14:54.051  Current Power State:             Power State #0
00:14:54.051  Power State #0:
00:14:54.051    Max Power:                     25.00 W
00:14:54.051    Non-Operational State:         Operational
00:14:54.051    Entry Latency:                 16 microseconds
00:14:54.051    Exit Latency:                  4 microseconds
00:14:54.051    Relative Read Throughput:      0
00:14:54.051    Relative Read Latency:         0
00:14:54.051    Relative Write Throughput:     0
00:14:54.051    Relative Write Latency:        0
00:14:54.051    Idle Power:                     Not Reported
00:14:54.051    Active Power:                   Not Reported
00:14:54.051  Non-Operational Permissive Mode: Not Supported
00:14:54.051  
00:14:54.051  Health Information
00:14:54.051  ==================
00:14:54.051  Critical Warnings:
00:14:54.051    Available Spare Space:     OK
00:14:54.051    Temperature:               OK
00:14:54.051    Device Reliability:        OK
00:14:54.051    Read Only:                 No
00:14:54.051    Volatile Memory Backup:    OK
00:14:54.051  Current Temperature:         323 Kelvin (50 Celsius)
00:14:54.051  Temperature Threshold:       343 Kelvin (70 Celsius)
00:14:54.051  Available Spare:             0%
00:14:54.051  Available Spare Threshold:   0%
00:14:54.051  Life Percentage Used:        0%
00:14:54.051  Data Units Read:             5041
00:14:54.051  Data Units Written:          4787
00:14:54.051  Host Read Commands:          204170
00:14:54.051  Host Write Commands:         218927
00:14:54.051  Controller Busy Time:        0 minutes
00:14:54.051  Power Cycles:                0
00:14:54.051  Power On Hours:              0 hours
00:14:54.051  Unsafe Shutdowns:            0
00:14:54.051  Unrecoverable Media Errors:  0
00:14:54.051  Lifetime Error Log Entries:  0
00:14:54.051  Warning Temperature Time:    0 minutes
00:14:54.051  Critical Temperature Time:   0 minutes
00:14:54.051  
00:14:54.051  Number of Queues
00:14:54.051  ================
00:14:54.051  Number of I/O Submission Queues:      64
00:14:54.051  Number of I/O Completion Queues:      64
00:14:54.051  
00:14:54.051  ZNS Specific Controller Data
00:14:54.051  ============================
00:14:54.051  Zone Append Size Limit:      0
00:14:54.051  
00:14:54.051  
00:14:54.051  Active Namespaces
00:14:54.051  =================
00:14:54.051  Namespace ID:1
00:14:54.051  Error Recovery Timeout:                Unlimited
00:14:54.051  Command Set Identifier:                NVM (00h)
00:14:54.051  Deallocate:                            Supported
00:14:54.051  Deallocated/Unwritten Error:           Supported
00:14:54.051  Deallocated Read Value:                All 0x00
00:14:54.051  Deallocate in Write Zeroes:            Not Supported
00:14:54.051  Deallocated Guard Field:               0xFFFF
00:14:54.051  Flush:                                 Supported
00:14:54.051  Reservation:                           Not Supported
00:14:54.051  Namespace Sharing Capabilities:        Private
00:14:54.051  Size (in LBAs):                        1310720 (5GiB)
00:14:54.051  Capacity (in LBAs):                    1310720 (5GiB)
00:14:54.051  Utilization (in LBAs):                 1310720 (5GiB)
00:14:54.051  Thin Provisioning:                     Not Supported
00:14:54.051  Per-NS Atomic Units:                   No
00:14:54.051  Maximum Single Source Range Length:    128
00:14:54.051  Maximum Copy Length:                   128
00:14:54.051  Maximum Source Range Count:            128
00:14:54.051  NGUID/EUI64 Never Reused:              No
00:14:54.051  Namespace Write Protected:             No
00:14:54.051  Number of LBA Formats:                 8
00:14:54.051  Current LBA Format:                    LBA Format #04
00:14:54.051  LBA Format #00: Data Size:   512  Metadata Size:     0
00:14:54.051  LBA Format #01: Data Size:   512  Metadata Size:     8
00:14:54.051  LBA Format #02: Data Size:   512  Metadata Size:    16
00:14:54.051  LBA Format #03: Data Size:   512  Metadata Size:    64
00:14:54.051  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:14:54.051  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:14:54.051  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:14:54.051  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:14:54.051  
00:14:54.051  NVM Specific Namespace Data
00:14:54.051  ===========================
00:14:54.051  Logical Block Storage Tag Mask:               0
00:14:54.051  Protection Information Capabilities:
00:14:54.051    16b Guard Protection Information Storage Tag Support:  No
00:14:54.051    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:14:54.051    Storage Tag Check Read Support:                        No
00:14:54.051  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.051   05:05:07 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:14:54.051   05:05:07 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:14:54.311  =====================================================
00:14:54.311  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:54.311  =====================================================
00:14:54.311  Controller Capabilities/Features
00:14:54.311  ================================
00:14:54.311  Vendor ID:                             1b36
00:14:54.311  Subsystem Vendor ID:                   1af4
00:14:54.311  Serial Number:                         12340
00:14:54.311  Model Number:                          QEMU NVMe Ctrl
00:14:54.311  Firmware Version:                      8.0.0
00:14:54.311  Recommended Arb Burst:                 6
00:14:54.311  IEEE OUI Identifier:                   00 54 52
00:14:54.311  Multi-path I/O
00:14:54.311    May have multiple subsystem ports:   No
00:14:54.311    May have multiple controllers:       No
00:14:54.311    Associated with SR-IOV VF:           No
00:14:54.311  Max Data Transfer Size:                524288
00:14:54.311  Max Number of Namespaces:              256
00:14:54.311  Max Number of I/O Queues:              64
00:14:54.311  NVMe Specification Version (VS):       1.4
00:14:54.311  NVMe Specification Version (Identify): 1.4
00:14:54.311  Maximum Queue Entries:                 2048
00:14:54.311  Contiguous Queues Required:            Yes
00:14:54.311  Arbitration Mechanisms Supported
00:14:54.311    Weighted Round Robin:                Not Supported
00:14:54.311    Vendor Specific:                     Not Supported
00:14:54.311  Reset Timeout:                         7500 ms
00:14:54.311  Doorbell Stride:                       4 bytes
00:14:54.311  NVM Subsystem Reset:                   Not Supported
00:14:54.311  Command Sets Supported
00:14:54.311    NVM Command Set:                     Supported
00:14:54.311  Boot Partition:                        Not Supported
00:14:54.311  Memory Page Size Minimum:              4096 bytes
00:14:54.311  Memory Page Size Maximum:              65536 bytes
00:14:54.311  Persistent Memory Region:              Not Supported
00:14:54.311  Optional Asynchronous Events Supported
00:14:54.311    Namespace Attribute Notices:         Supported
00:14:54.311    Firmware Activation Notices:         Not Supported
00:14:54.311    ANA Change Notices:                  Not Supported
00:14:54.311    PLE Aggregate Log Change Notices:    Not Supported
00:14:54.311    LBA Status Info Alert Notices:       Not Supported
00:14:54.311    EGE Aggregate Log Change Notices:    Not Supported
00:14:54.311    Normal NVM Subsystem Shutdown event: Not Supported
00:14:54.311    Zone Descriptor Change Notices:      Not Supported
00:14:54.311    Discovery Log Change Notices:        Not Supported
00:14:54.311  Controller Attributes
00:14:54.311    128-bit Host Identifier:             Not Supported
00:14:54.311    Non-Operational Permissive Mode:     Not Supported
00:14:54.311    NVM Sets:                            Not Supported
00:14:54.311    Read Recovery Levels:                Not Supported
00:14:54.311    Endurance Groups:                    Not Supported
00:14:54.311    Predictable Latency Mode:            Not Supported
00:14:54.311    Traffic Based Keep Alive:            Not Supported
00:14:54.311    Namespace Granularity:               Not Supported
00:14:54.311    SQ Associations:                     Not Supported
00:14:54.311    UUID List:                           Not Supported
00:14:54.311    Multi-Domain Subsystem:              Not Supported
00:14:54.311    Fixed Capacity Management:           Not Supported
00:14:54.311    Variable Capacity Management:        Not Supported
00:14:54.311    Delete Endurance Group:              Not Supported
00:14:54.311    Delete NVM Set:                      Not Supported
00:14:54.311    Extended LBA Formats Supported:      Supported
00:14:54.311    Flexible Data Placement Supported:   Not Supported
00:14:54.311  
00:14:54.311  Controller Memory Buffer Support
00:14:54.311  ================================
00:14:54.311  Supported:                             No
00:14:54.311  
00:14:54.311  Persistent Memory Region Support
00:14:54.311  ================================
00:14:54.311  Supported:                             No
00:14:54.311  
00:14:54.311  Admin Command Set Attributes
00:14:54.311  ============================
00:14:54.311  Security Send/Receive:                 Not Supported
00:14:54.311  Format NVM:                            Supported
00:14:54.311  Firmware Activate/Download:            Not Supported
00:14:54.311  Namespace Management:                  Supported
00:14:54.311  Device Self-Test:                      Not Supported
00:14:54.311  Directives:                            Supported
00:14:54.311  NVMe-MI:                               Not Supported
00:14:54.311  Virtualization Management:             Not Supported
00:14:54.311  Doorbell Buffer Config:                Supported
00:14:54.311  Get LBA Status Capability:             Not Supported
00:14:54.311  Command & Feature Lockdown Capability: Not Supported
00:14:54.311  Abort Command Limit:                   4
00:14:54.311  Async Event Request Limit:             4
00:14:54.311  Number of Firmware Slots:              N/A
00:14:54.311  Firmware Slot 1 Read-Only:             N/A
00:14:54.311  Firmware Activation Without Reset:     N/A
00:14:54.311  Multiple Update Detection Support:     N/A
00:14:54.311  Firmware Update Granularity:           No Information Provided
00:14:54.311  Per-Namespace SMART Log:               Yes
00:14:54.311  Asymmetric Namespace Access Log Page:  Not Supported
00:14:54.311  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:14:54.311  Command Effects Log Page:              Supported
00:14:54.312  Get Log Page Extended Data:            Supported
00:14:54.312  Telemetry Log Pages:                   Not Supported
00:14:54.312  Persistent Event Log Pages:            Not Supported
00:14:54.312  Supported Log Pages Log Page:          May Support
00:14:54.312  Commands Supported & Effects Log Page: Not Supported
00:14:54.312  Feature Identifiers & Effects Log Page: May Support
00:14:54.312  NVMe-MI Commands & Effects Log Page:   May Support
00:14:54.312  Data Area 4 for Telemetry Log:         Not Supported
00:14:54.312  Error Log Page Entries Supported:      1
00:14:54.312  Keep Alive:                            Not Supported
00:14:54.312  
00:14:54.312  NVM Command Set Attributes
00:14:54.312  ==========================
00:14:54.312  Submission Queue Entry Size
00:14:54.312    Max:                       64
00:14:54.312    Min:                       64
00:14:54.312  Completion Queue Entry Size
00:14:54.312    Max:                       16
00:14:54.312    Min:                       16
00:14:54.312  Number of Namespaces:        256
00:14:54.312  Compare Command:             Supported
00:14:54.312  Write Uncorrectable Command: Not Supported
00:14:54.312  Dataset Management Command:  Supported
00:14:54.312  Write Zeroes Command:        Supported
00:14:54.312  Set Features Save Field:     Supported
00:14:54.312  Reservations:                Not Supported
00:14:54.312  Timestamp:                   Supported
00:14:54.312  Copy:                        Supported
00:14:54.312  Volatile Write Cache:        Present
00:14:54.312  Atomic Write Unit (Normal):  1
00:14:54.312  Atomic Write Unit (PFail):   1
00:14:54.312  Atomic Compare & Write Unit: 1
00:14:54.312  Fused Compare & Write:       Not Supported
00:14:54.312  Scatter-Gather List
00:14:54.312    SGL Command Set:           Supported
00:14:54.312    SGL Keyed:                 Not Supported
00:14:54.312    SGL Bit Bucket Descriptor: Not Supported
00:14:54.312    SGL Metadata Pointer:      Not Supported
00:14:54.312    Oversized SGL:             Not Supported
00:14:54.312    SGL Metadata Address:      Not Supported
00:14:54.312    SGL Offset:                Not Supported
00:14:54.312    Transport SGL Data Block:  Not Supported
00:14:54.312  Replay Protected Memory Block:  Not Supported
00:14:54.312  
00:14:54.312  Firmware Slot Information
00:14:54.312  =========================
00:14:54.312  Active slot:                 1
00:14:54.312  Slot 1 Firmware Revision:    1.0
00:14:54.312  
00:14:54.312  
00:14:54.312  Commands Supported and Effects
00:14:54.312  ==============================
00:14:54.312  Admin Commands
00:14:54.312  --------------
00:14:54.312     Delete I/O Submission Queue (00h): Supported 
00:14:54.312     Create I/O Submission Queue (01h): Supported 
00:14:54.312                    Get Log Page (02h): Supported 
00:14:54.312     Delete I/O Completion Queue (04h): Supported 
00:14:54.312     Create I/O Completion Queue (05h): Supported 
00:14:54.312                        Identify (06h): Supported 
00:14:54.312                           Abort (08h): Supported 
00:14:54.312                    Set Features (09h): Supported 
00:14:54.312                    Get Features (0Ah): Supported 
00:14:54.312      Asynchronous Event Request (0Ch): Supported 
00:14:54.312            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:14:54.312                  Directive Send (19h): Supported 
00:14:54.312               Directive Receive (1Ah): Supported 
00:14:54.312       Virtualization Management (1Ch): Supported 
00:14:54.312          Doorbell Buffer Config (7Ch): Supported 
00:14:54.312                      Format NVM (80h): Supported LBA-Change 
00:14:54.312  I/O Commands
00:14:54.312  ------------
00:14:54.312                           Flush (00h): Supported LBA-Change 
00:14:54.312                           Write (01h): Supported LBA-Change 
00:14:54.312                            Read (02h): Supported 
00:14:54.312                         Compare (05h): Supported 
00:14:54.312                    Write Zeroes (08h): Supported LBA-Change 
00:14:54.312              Dataset Management (09h): Supported LBA-Change 
00:14:54.312                         Unknown (0Ch): Supported 
00:14:54.312                         Unknown (12h): Supported 
00:14:54.312                            Copy (19h): Supported LBA-Change 
00:14:54.312                         Unknown (1Dh): Supported LBA-Change 
00:14:54.312  
00:14:54.312  Error Log
00:14:54.312  =========
00:14:54.312  
00:14:54.312  Arbitration
00:14:54.312  ===========
00:14:54.312  Arbitration Burst:           no limit
00:14:54.312  
00:14:54.312  Power Management
00:14:54.312  ================
00:14:54.312  Number of Power States:          1
00:14:54.312  Current Power State:             Power State #0
00:14:54.312  Power State #0:
00:14:54.312    Max Power:                     25.00 W
00:14:54.312    Non-Operational State:         Operational
00:14:54.312    Entry Latency:                 16 microseconds
00:14:54.312    Exit Latency:                  4 microseconds
00:14:54.312    Relative Read Throughput:      0
00:14:54.312    Relative Read Latency:         0
00:14:54.312    Relative Write Throughput:     0
00:14:54.312    Relative Write Latency:        0
00:14:54.312    Idle Power:                    Not Reported
00:14:54.312    Active Power:                  Not Reported
00:14:54.312  Non-Operational Permissive Mode: Not Supported
00:14:54.312  
00:14:54.312  Health Information
00:14:54.312  ==================
00:14:54.312  Critical Warnings:
00:14:54.312    Available Spare Space:     OK
00:14:54.312    Temperature:               OK
00:14:54.312    Device Reliability:        OK
00:14:54.312    Read Only:                 No
00:14:54.312    Volatile Memory Backup:    OK
00:14:54.312  Current Temperature:         323 Kelvin (50 Celsius)
00:14:54.312  Temperature Threshold:       343 Kelvin (70 Celsius)
00:14:54.312  Available Spare:             0%
00:14:54.312  Available Spare Threshold:   0%
00:14:54.312  Life Percentage Used:        0%
00:14:54.312  Data Units Read:             5041
00:14:54.312  Data Units Written:          4787
00:14:54.312  Host Read Commands:          204170
00:14:54.312  Host Write Commands:         218927
00:14:54.312  Controller Busy Time:        0 minutes
00:14:54.312  Power Cycles:                0
00:14:54.312  Power On Hours:              0 hours
00:14:54.312  Unsafe Shutdowns:            0
00:14:54.312  Unrecoverable Media Errors:  0
00:14:54.312  Lifetime Error Log Entries:  0
00:14:54.312  Warning Temperature Time:    0 minutes
00:14:54.312  Critical Temperature Time:   0 minutes
00:14:54.312  
00:14:54.312  Number of Queues
00:14:54.312  ================
00:14:54.312  Number of I/O Submission Queues:      64
00:14:54.312  Number of I/O Completion Queues:      64
00:14:54.312  
00:14:54.312  ZNS Specific Controller Data
00:14:54.312  ============================
00:14:54.312  Zone Append Size Limit:      0
00:14:54.312  
00:14:54.312  
00:14:54.312  Active Namespaces
00:14:54.312  =================
00:14:54.312  Namespace ID:1
00:14:54.312  Error Recovery Timeout:                Unlimited
00:14:54.312  Command Set Identifier:                NVM (00h)
00:14:54.312  Deallocate:                            Supported
00:14:54.312  Deallocated/Unwritten Error:           Supported
00:14:54.312  Deallocated Read Value:                All 0x00
00:14:54.312  Deallocate in Write Zeroes:            Not Supported
00:14:54.312  Deallocated Guard Field:               0xFFFF
00:14:54.312  Flush:                                 Supported
00:14:54.312  Reservation:                           Not Supported
00:14:54.312  Namespace Sharing Capabilities:        Private
00:14:54.312  Size (in LBAs):                        1310720 (5GiB)
00:14:54.312  Capacity (in LBAs):                    1310720 (5GiB)
00:14:54.312  Utilization (in LBAs):                 1310720 (5GiB)
00:14:54.312  Thin Provisioning:                     Not Supported
00:14:54.312  Per-NS Atomic Units:                   No
00:14:54.312  Maximum Single Source Range Length:    128
00:14:54.312  Maximum Copy Length:                   128
00:14:54.312  Maximum Source Range Count:            128
00:14:54.312  NGUID/EUI64 Never Reused:              No
00:14:54.312  Namespace Write Protected:             No
00:14:54.312  Number of LBA Formats:                 8
00:14:54.312  Current LBA Format:                    LBA Format #04
00:14:54.312  LBA Format #00: Data Size:   512  Metadata Size:     0
00:14:54.312  LBA Format #01: Data Size:   512  Metadata Size:     8
00:14:54.312  LBA Format #02: Data Size:   512  Metadata Size:    16
00:14:54.312  LBA Format #03: Data Size:   512  Metadata Size:    64
00:14:54.312  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:14:54.312  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:14:54.312  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:14:54.312  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:14:54.312  
00:14:54.312  NVM Specific Namespace Data
00:14:54.312  ===========================
00:14:54.312  Logical Block Storage Tag Mask:               0
00:14:54.312  Protection Information Capabilities:
00:14:54.312    16b Guard Protection Information Storage Tag Support:  No
00:14:54.312    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:14:54.312    Storage Tag Check Read Support:                        No
00:14:54.312  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:14:54.312  
00:14:54.312  real	0m0.689s
00:14:54.312  user	0m0.312s
00:14:54.312  sys	0m0.281s
00:14:54.312   05:05:08 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:54.312  ************************************
00:14:54.312  END TEST nvme_identify
00:14:54.312  ************************************
00:14:54.313   05:05:08 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:14:54.572   05:05:08 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:14:54.572   05:05:08 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:54.572   05:05:08 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:54.572   05:05:08 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:54.572  ************************************
00:14:54.572  START TEST nvme_perf
00:14:54.572  ************************************
00:14:54.572   05:05:08 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:14:54.572   05:05:08 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:14:55.948  Initializing NVMe Controllers
00:14:55.948  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:55.948  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:55.948  Initialization complete. Launching workers.
00:14:55.948  ========================================================
00:14:55.948                                                                             Latency(us)
00:14:55.948  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:55.948  PCIE (0000:00:10.0) NSID 1 from core  0:   69739.71     817.26    1834.48     793.73    5207.55
00:14:55.948  ========================================================
00:14:55.948  Total                                  :   69739.71     817.26    1834.48     793.73    5207.55
00:14:55.948  
00:14:55.948  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:14:55.948  =================================================================================
00:14:55.948    1.00000% :  1012.829us
00:14:55.948   10.00000% :  1266.036us
00:14:55.948   25.00000% :  1474.560us
00:14:55.948   50.00000% :  1779.898us
00:14:55.948   75.00000% :  2115.025us
00:14:55.948   90.00000% :  2457.600us
00:14:55.949   95.00000% :  2800.175us
00:14:55.949   98.00000% :  3127.855us
00:14:55.949   99.00000% :  3351.273us
00:14:55.949   99.50000% :  3544.902us
00:14:55.949   99.90000% :  4289.629us
00:14:55.949   99.99000% :  5064.145us
00:14:55.949   99.99900% :  5213.091us
00:14:55.949   99.99990% :  5213.091us
00:14:55.949   99.99999% :  5213.091us
00:14:55.949  
00:14:55.949  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:14:55.949  ==============================================================================
00:14:55.949         Range in us     Cumulative    IO count
00:14:55.949    793.135 -   796.858:    0.0043%  (        3)
00:14:55.949    808.029 -   811.753:    0.0086%  (        3)
00:14:55.949    811.753 -   815.476:    0.0100%  (        1)
00:14:55.949    815.476 -   819.200:    0.0115%  (        1)
00:14:55.949    819.200 -   822.924:    0.0129%  (        1)
00:14:55.949    822.924 -   826.647:    0.0143%  (        1)
00:14:55.949    826.647 -   830.371:    0.0186%  (        3)
00:14:55.949    830.371 -   834.095:    0.0215%  (        2)
00:14:55.949    834.095 -   837.818:    0.0258%  (        3)
00:14:55.949    837.818 -   841.542:    0.0287%  (        2)
00:14:55.949    841.542 -   845.265:    0.0330%  (        3)
00:14:55.949    845.265 -   848.989:    0.0358%  (        2)
00:14:55.949    848.989 -   852.713:    0.0416%  (        4)
00:14:55.949    852.713 -   856.436:    0.0459%  (        3)
00:14:55.949    856.436 -   860.160:    0.0487%  (        2)
00:14:55.949    860.160 -   863.884:    0.0545%  (        4)
00:14:55.949    863.884 -   867.607:    0.0573%  (        2)
00:14:55.949    867.607 -   871.331:    0.0616%  (        3)
00:14:55.949    871.331 -   875.055:    0.0659%  (        3)
00:14:55.949    875.055 -   878.778:    0.0774%  (        8)
00:14:55.949    878.778 -   882.502:    0.0846%  (        5)
00:14:55.949    882.502 -   886.225:    0.0917%  (        5)
00:14:55.949    886.225 -   889.949:    0.1003%  (        6)
00:14:55.949    889.949 -   893.673:    0.1161%  (       11)
00:14:55.949    893.673 -   897.396:    0.1204%  (        3)
00:14:55.949    897.396 -   901.120:    0.1362%  (       11)
00:14:55.949    901.120 -   904.844:    0.1519%  (       11)
00:14:55.949    904.844 -   908.567:    0.1649%  (        9)
00:14:55.949    908.567 -   912.291:    0.1763%  (        8)
00:14:55.949    912.291 -   916.015:    0.1921%  (       11)
00:14:55.949    916.015 -   919.738:    0.2064%  (       10)
00:14:55.949    919.738 -   923.462:    0.2251%  (       13)
00:14:55.949    923.462 -   927.185:    0.2394%  (       10)
00:14:55.949    927.185 -   930.909:    0.2595%  (       14)
00:14:55.949    930.909 -   934.633:    0.2767%  (       12)
00:14:55.949    934.633 -   938.356:    0.2982%  (       15)
00:14:55.949    938.356 -   942.080:    0.3225%  (       17)
00:14:55.949    942.080 -   945.804:    0.3469%  (       17)
00:14:55.949    945.804 -   949.527:    0.3684%  (       15)
00:14:55.949    949.527 -   953.251:    0.4071%  (       27)
00:14:55.949    953.251 -   960.698:    0.4774%  (       49)
00:14:55.949    960.698 -   968.145:    0.5404%  (       44)
00:14:55.949    968.145 -   975.593:    0.6135%  (       51)
00:14:55.949    975.593 -   983.040:    0.6881%  (       52)
00:14:55.949    983.040 -   990.487:    0.7540%  (       46)
00:14:55.949    990.487 -   997.935:    0.8386%  (       59)
00:14:55.949    997.935 -  1005.382:    0.9189%  (       56)
00:14:55.949   1005.382 -  1012.829:    1.0206%  (       71)
00:14:55.949   1012.829 -  1020.276:    1.1239%  (       72)
00:14:55.949   1020.276 -  1027.724:    1.2199%  (       67)
00:14:55.949   1027.724 -  1035.171:    1.3245%  (       73)
00:14:55.949   1035.171 -  1042.618:    1.4550%  (       91)
00:14:55.949   1042.618 -  1050.065:    1.5768%  (       85)
00:14:55.949   1050.065 -  1057.513:    1.7130%  (       95)
00:14:55.949   1057.513 -  1064.960:    1.8564%  (      100)
00:14:55.949   1064.960 -  1072.407:    2.0169%  (      112)
00:14:55.949   1072.407 -  1079.855:    2.1660%  (      104)
00:14:55.949   1079.855 -  1087.302:    2.3237%  (      110)
00:14:55.949   1087.302 -  1094.749:    2.5530%  (      160)
00:14:55.949   1094.749 -  1102.196:    2.6964%  (      100)
00:14:55.949   1102.196 -  1109.644:    2.9057%  (      146)
00:14:55.949   1109.644 -  1117.091:    3.1092%  (      142)
00:14:55.949   1117.091 -  1124.538:    3.3243%  (      150)
00:14:55.949   1124.538 -  1131.985:    3.5479%  (      156)
00:14:55.949   1131.985 -  1139.433:    3.7901%  (      169)
00:14:55.949   1139.433 -  1146.880:    4.0453%  (      178)
00:14:55.949   1146.880 -  1154.327:    4.3392%  (      205)
00:14:55.949   1154.327 -  1161.775:    4.5986%  (      181)
00:14:55.949   1161.775 -  1169.222:    4.9283%  (      230)
00:14:55.949   1169.222 -  1176.669:    5.2480%  (      223)
00:14:55.949   1176.669 -  1184.116:    5.6150%  (      256)
00:14:55.949   1184.116 -  1191.564:    5.9217%  (      214)
00:14:55.949   1191.564 -  1199.011:    6.2944%  (      260)
00:14:55.949   1199.011 -  1206.458:    6.6772%  (      267)
00:14:55.949   1206.458 -  1213.905:    7.1001%  (      295)
00:14:55.949   1213.905 -  1221.353:    7.5315%  (      301)
00:14:55.949   1221.353 -  1228.800:    7.9759%  (      310)
00:14:55.949   1228.800 -  1236.247:    8.4303%  (      317)
00:14:55.949   1236.247 -  1243.695:    8.8933%  (      323)
00:14:55.949   1243.695 -  1251.142:    9.2919%  (      278)
00:14:55.949   1251.142 -  1258.589:    9.7377%  (      311)
00:14:55.949   1258.589 -  1266.036:   10.2150%  (      333)
00:14:55.949   1266.036 -  1273.484:   10.6508%  (      304)
00:14:55.949   1273.484 -  1280.931:   11.1368%  (      339)
00:14:55.949   1280.931 -  1288.378:   11.5983%  (      322)
00:14:55.949   1288.378 -  1295.825:   12.0843%  (      339)
00:14:55.949   1295.825 -  1303.273:   12.5702%  (      339)
00:14:55.949   1303.273 -  1310.720:   13.0992%  (      369)
00:14:55.949   1310.720 -  1318.167:   13.5995%  (      349)
00:14:55.949   1318.167 -  1325.615:   14.1256%  (      367)
00:14:55.949   1325.615 -  1333.062:   14.6158%  (      342)
00:14:55.949   1333.062 -  1340.509:   15.1606%  (      380)
00:14:55.949   1340.509 -  1347.956:   15.6823%  (      364)
00:14:55.949   1347.956 -  1355.404:   16.1955%  (      358)
00:14:55.949   1355.404 -  1362.851:   16.7718%  (      402)
00:14:55.949   1362.851 -  1370.298:   17.3050%  (      372)
00:14:55.949   1370.298 -  1377.745:   17.8899%  (      408)
00:14:55.949   1377.745 -  1385.193:   18.4346%  (      380)
00:14:55.949   1385.193 -  1392.640:   18.9951%  (      391)
00:14:55.949   1392.640 -  1400.087:   19.5628%  (      396)
00:14:55.949   1400.087 -  1407.535:   20.1175%  (      387)
00:14:55.949   1407.535 -  1414.982:   20.6866%  (      397)
00:14:55.950   1414.982 -  1422.429:   21.2414%  (      387)
00:14:55.950   1422.429 -  1429.876:   21.8334%  (      413)
00:14:55.950   1429.876 -  1437.324:   22.4341%  (      419)
00:14:55.950   1437.324 -  1444.771:   23.0118%  (      403)
00:14:55.950   1444.771 -  1452.218:   23.6153%  (      421)
00:14:55.950   1452.218 -  1459.665:   24.1829%  (      396)
00:14:55.950   1459.665 -  1467.113:   24.7893%  (      423)
00:14:55.950   1467.113 -  1474.560:   25.3756%  (      409)
00:14:55.950   1474.560 -  1482.007:   25.9991%  (      435)
00:14:55.950   1482.007 -  1489.455:   26.5797%  (      405)
00:14:55.950   1489.455 -  1496.902:   27.2119%  (      441)
00:14:55.950   1496.902 -  1504.349:   27.8082%  (      416)
00:14:55.950   1504.349 -  1511.796:   28.4260%  (      431)
00:14:55.950   1511.796 -  1519.244:   29.0123%  (      409)
00:14:55.950   1519.244 -  1526.691:   29.6101%  (      417)
00:14:55.950   1526.691 -  1534.138:   30.1964%  (      409)
00:14:55.950   1534.138 -  1541.585:   30.7856%  (      411)
00:14:55.950   1541.585 -  1549.033:   31.4048%  (      432)
00:14:55.950   1549.033 -  1556.480:   31.9983%  (      414)
00:14:55.950   1556.480 -  1563.927:   32.5860%  (      410)
00:14:55.950   1563.927 -  1571.375:   33.1981%  (      427)
00:14:55.950   1571.375 -  1578.822:   33.7744%  (      402)
00:14:55.950   1578.822 -  1586.269:   34.4094%  (      443)
00:14:55.950   1586.269 -  1593.716:   35.0029%  (      414)
00:14:55.950   1593.716 -  1601.164:   35.6350%  (      441)
00:14:55.950   1601.164 -  1608.611:   36.2084%  (      400)
00:14:55.950   1608.611 -  1616.058:   36.8134%  (      422)
00:14:55.950   1616.058 -  1623.505:   37.4240%  (      426)
00:14:55.950   1623.505 -  1630.953:   38.0232%  (      418)
00:14:55.950   1630.953 -  1638.400:   38.6239%  (      419)
00:14:55.950   1638.400 -  1645.847:   39.2388%  (      429)
00:14:55.950   1645.847 -  1653.295:   39.8538%  (      429)
00:14:55.950   1653.295 -  1660.742:   40.4429%  (      411)
00:14:55.950   1660.742 -  1668.189:   41.0938%  (      454)
00:14:55.950   1668.189 -  1675.636:   41.6829%  (      411)
00:14:55.950   1675.636 -  1683.084:   42.3452%  (      462)
00:14:55.950   1683.084 -  1690.531:   42.9401%  (      415)
00:14:55.950   1690.531 -  1697.978:   43.6095%  (      467)
00:14:55.950   1697.978 -  1705.425:   44.2044%  (      415)
00:14:55.950   1705.425 -  1712.873:   44.8294%  (      436)
00:14:55.950   1712.873 -  1720.320:   45.4573%  (      438)
00:14:55.950   1720.320 -  1727.767:   46.0923%  (      443)
00:14:55.950   1727.767 -  1735.215:   46.6915%  (      418)
00:14:55.950   1735.215 -  1742.662:   47.3093%  (      431)
00:14:55.950   1742.662 -  1750.109:   47.9415%  (      441)
00:14:55.950   1750.109 -  1757.556:   48.5206%  (      404)
00:14:55.950   1757.556 -  1765.004:   49.1886%  (      466)
00:14:55.950   1765.004 -  1772.451:   49.7362%  (      382)
00:14:55.950   1772.451 -  1779.898:   50.4200%  (      477)
00:14:55.950   1779.898 -  1787.345:   51.0034%  (      407)
00:14:55.950   1787.345 -  1794.793:   51.6155%  (      427)
00:14:55.950   1794.793 -  1802.240:   52.2276%  (      427)
00:14:55.950   1802.240 -  1809.687:   52.8698%  (      448)
00:14:55.950   1809.687 -  1817.135:   53.4791%  (      425)
00:14:55.950   1817.135 -  1824.582:   54.0725%  (      414)
00:14:55.950   1824.582 -  1832.029:   54.7319%  (      460)
00:14:55.950   1832.029 -  1839.476:   55.2881%  (      388)
00:14:55.950   1839.476 -  1846.924:   55.9289%  (      447)
00:14:55.950   1846.924 -  1854.371:   56.5052%  (      402)
00:14:55.950   1854.371 -  1861.818:   57.1201%  (      429)
00:14:55.950   1861.818 -  1869.265:   57.6950%  (      401)
00:14:55.950   1869.265 -  1876.713:   58.2899%  (      415)
00:14:55.950   1876.713 -  1884.160:   58.8833%  (      414)
00:14:55.950   1884.160 -  1891.607:   59.4682%  (      408)
00:14:55.950   1891.607 -  1899.055:   60.0487%  (      405)
00:14:55.950   1899.055 -  1906.502:   60.6637%  (      429)
00:14:55.950   1906.502 -  1921.396:   61.8191%  (      806)
00:14:55.950   1921.396 -  1936.291:   62.9802%  (      810)
00:14:55.950   1936.291 -  1951.185:   64.1098%  (      788)
00:14:55.950   1951.185 -  1966.080:   65.2566%  (      800)
00:14:55.950   1966.080 -  1980.975:   66.3991%  (      797)
00:14:55.950   1980.975 -  1995.869:   67.4814%  (      755)
00:14:55.950   1995.869 -  2010.764:   68.5350%  (      735)
00:14:55.950   2010.764 -  2025.658:   69.6144%  (      753)
00:14:55.950   2025.658 -  2040.553:   70.6365%  (      713)
00:14:55.950   2040.553 -  2055.447:   71.6700%  (      721)
00:14:55.950   2055.447 -  2070.342:   72.6849%  (      708)
00:14:55.950   2070.342 -  2085.236:   73.6769%  (      692)
00:14:55.950   2085.236 -  2100.131:   74.6631%  (      688)
00:14:55.950   2100.131 -  2115.025:   75.5892%  (      646)
00:14:55.950   2115.025 -  2129.920:   76.5367%  (      661)
00:14:55.950   2129.920 -  2144.815:   77.4642%  (      647)
00:14:55.950   2144.815 -  2159.709:   78.3701%  (      632)
00:14:55.950   2159.709 -  2174.604:   79.2446%  (      610)
00:14:55.950   2174.604 -  2189.498:   80.0917%  (      591)
00:14:55.950   2189.498 -  2204.393:   80.9232%  (      580)
00:14:55.950   2204.393 -  2219.287:   81.7159%  (      553)
00:14:55.950   2219.287 -  2234.182:   82.4814%  (      534)
00:14:55.950   2234.182 -  2249.076:   83.2268%  (      520)
00:14:55.950   2249.076 -  2263.971:   83.9521%  (      506)
00:14:55.950   2263.971 -  2278.865:   84.5915%  (      446)
00:14:55.950   2278.865 -  2293.760:   85.2638%  (      469)
00:14:55.950   2293.760 -  2308.655:   85.8773%  (      428)
00:14:55.950   2308.655 -  2323.549:   86.4435%  (      395)
00:14:55.950   2323.549 -  2338.444:   87.0169%  (      400)
00:14:55.950   2338.444 -  2353.338:   87.5473%  (      370)
00:14:55.950   2353.338 -  2368.233:   88.0232%  (      332)
00:14:55.950   2368.233 -  2383.127:   88.4504%  (      298)
00:14:55.950   2383.127 -  2398.022:   88.8231%  (      260)
00:14:55.951   2398.022 -  2412.916:   89.2015%  (      264)
00:14:55.951   2412.916 -  2427.811:   89.5399%  (      236)
00:14:55.951   2427.811 -  2442.705:   89.8624%  (      225)
00:14:55.951   2442.705 -  2457.600:   90.1448%  (      197)
00:14:55.951   2457.600 -  2472.495:   90.4343%  (      202)
00:14:55.951   2472.495 -  2487.389:   90.6952%  (      182)
00:14:55.951   2487.389 -  2502.284:   90.9490%  (      177)
00:14:55.951   2502.284 -  2517.178:   91.1898%  (      168)
00:14:55.951   2517.178 -  2532.073:   91.4106%  (      154)
00:14:55.951   2532.073 -  2546.967:   91.6241%  (      149)
00:14:55.951   2546.967 -  2561.862:   91.8406%  (      151)
00:14:55.951   2561.862 -  2576.756:   92.0485%  (      145)
00:14:55.951   2576.756 -  2591.651:   92.2549%  (      144)
00:14:55.951   2591.651 -  2606.545:   92.4556%  (      140)
00:14:55.951   2606.545 -  2621.440:   92.6591%  (      142)
00:14:55.951   2621.440 -  2636.335:   92.8598%  (      140)
00:14:55.951   2636.335 -  2651.229:   93.0562%  (      137)
00:14:55.951   2651.229 -  2666.124:   93.2583%  (      141)
00:14:55.951   2666.124 -  2681.018:   93.4619%  (      142)
00:14:55.951   2681.018 -  2695.913:   93.6597%  (      138)
00:14:55.951   2695.913 -  2710.807:   93.8532%  (      135)
00:14:55.951   2710.807 -  2725.702:   94.0453%  (      134)
00:14:55.951   2725.702 -  2740.596:   94.2431%  (      138)
00:14:55.951   2740.596 -  2755.491:   94.4381%  (      136)
00:14:55.951   2755.491 -  2770.385:   94.6388%  (      140)
00:14:55.951   2770.385 -  2785.280:   94.8351%  (      137)
00:14:55.951   2785.280 -  2800.175:   95.0373%  (      141)
00:14:55.951   2800.175 -  2815.069:   95.2150%  (      124)
00:14:55.951   2815.069 -  2829.964:   95.4114%  (      137)
00:14:55.951   2829.964 -  2844.858:   95.5978%  (      130)
00:14:55.951   2844.858 -  2859.753:   95.7812%  (      128)
00:14:55.951   2859.753 -  2874.647:   95.9604%  (      125)
00:14:55.951   2874.647 -  2889.542:   96.1253%  (      115)
00:14:55.951   2889.542 -  2904.436:   96.3045%  (      125)
00:14:55.951   2904.436 -  2919.331:   96.4765%  (      120)
00:14:55.951   2919.331 -  2934.225:   96.6270%  (      105)
00:14:55.951   2934.225 -  2949.120:   96.7804%  (      107)
00:14:55.951   2949.120 -  2964.015:   96.9252%  (      101)
00:14:55.951   2964.015 -  2978.909:   97.0642%  (       97)
00:14:55.951   2978.909 -  2993.804:   97.1861%  (       85)
00:14:55.951   2993.804 -  3008.698:   97.3007%  (       80)
00:14:55.951   3008.698 -  3023.593:   97.4054%  (       73)
00:14:55.951   3023.593 -  3038.487:   97.5143%  (       76)
00:14:55.951   3038.487 -  3053.382:   97.6175%  (       72)
00:14:55.951   3053.382 -  3068.276:   97.7107%  (       65)
00:14:55.951   3068.276 -  3083.171:   97.7953%  (       59)
00:14:55.951   3083.171 -  3098.065:   97.8842%  (       62)
00:14:55.951   3098.065 -  3112.960:   97.9673%  (       58)
00:14:55.951   3112.960 -  3127.855:   98.0519%  (       59)
00:14:55.951   3127.855 -  3142.749:   98.1307%  (       55)
00:14:55.951   3142.749 -  3157.644:   98.2024%  (       50)
00:14:55.951   3157.644 -  3172.538:   98.2712%  (       48)
00:14:55.951   3172.538 -  3187.433:   98.3443%  (       51)
00:14:55.951   3187.433 -  3202.327:   98.4174%  (       51)
00:14:55.951   3202.327 -  3217.222:   98.4819%  (       45)
00:14:55.951   3217.222 -  3232.116:   98.5507%  (       48)
00:14:55.951   3232.116 -  3247.011:   98.6181%  (       47)
00:14:55.951   3247.011 -  3261.905:   98.6783%  (       42)
00:14:55.951   3261.905 -  3276.800:   98.7414%  (       44)
00:14:55.951   3276.800 -  3291.695:   98.8073%  (       46)
00:14:55.951   3291.695 -  3306.589:   98.8690%  (       43)
00:14:55.951   3306.589 -  3321.484:   98.9321%  (       44)
00:14:55.951   3321.484 -  3336.378:   98.9808%  (       34)
00:14:55.951   3336.378 -  3351.273:   99.0338%  (       37)
00:14:55.951   3351.273 -  3366.167:   99.0840%  (       35)
00:14:55.951   3366.167 -  3381.062:   99.1385%  (       38)
00:14:55.951   3381.062 -  3395.956:   99.1915%  (       37)
00:14:55.951   3395.956 -  3410.851:   99.2374%  (       32)
00:14:55.951   3410.851 -  3425.745:   99.2804%  (       30)
00:14:55.951   3425.745 -  3440.640:   99.3191%  (       27)
00:14:55.951   3440.640 -  3455.535:   99.3607%  (       29)
00:14:55.951   3455.535 -  3470.429:   99.3965%  (       25)
00:14:55.951   3470.429 -  3485.324:   99.4295%  (       23)
00:14:55.951   3485.324 -  3500.218:   99.4524%  (       16)
00:14:55.951   3500.218 -  3515.113:   99.4768%  (       17)
00:14:55.951   3515.113 -  3530.007:   99.4897%  (        9)
00:14:55.951   3530.007 -  3544.902:   99.5069%  (       12)
00:14:55.951   3544.902 -  3559.796:   99.5198%  (        9)
00:14:55.951   3559.796 -  3574.691:   99.5341%  (       10)
00:14:55.951   3574.691 -  3589.585:   99.5485%  (       10)
00:14:55.951   3589.585 -  3604.480:   99.5614%  (        9)
00:14:55.951   3604.480 -  3619.375:   99.5757%  (       10)
00:14:55.951   3619.375 -  3634.269:   99.5886%  (        9)
00:14:55.951   3634.269 -  3649.164:   99.6044%  (       11)
00:14:55.951   3649.164 -  3664.058:   99.6187%  (       10)
00:14:55.951   3664.058 -  3678.953:   99.6316%  (        9)
00:14:55.951   3678.953 -  3693.847:   99.6431%  (        8)
00:14:55.951   3693.847 -  3708.742:   99.6531%  (        7)
00:14:55.951   3708.742 -  3723.636:   99.6646%  (        8)
00:14:55.951   3723.636 -  3738.531:   99.6760%  (        8)
00:14:55.951   3738.531 -  3753.425:   99.6861%  (        7)
00:14:55.951   3753.425 -  3768.320:   99.6961%  (        7)
00:14:55.951   3768.320 -  3783.215:   99.7047%  (        6)
00:14:55.951   3783.215 -  3798.109:   99.7133%  (        6)
00:14:55.951   3798.109 -  3813.004:   99.7233%  (        7)
00:14:55.951   3813.004 -  3842.793:   99.7434%  (       14)
00:14:55.951   3842.793 -  3872.582:   99.7606%  (       12)
00:14:55.951   3872.582 -  3902.371:   99.7721%  (        8)
00:14:55.951   3902.371 -  3932.160:   99.7864%  (       10)
00:14:55.951   3932.160 -  3961.949:   99.7950%  (        6)
00:14:55.951   3961.949 -  3991.738:   99.8050%  (        7)
00:14:55.951   3991.738 -  4021.527:   99.8165%  (        8)
00:14:55.951   4021.527 -  4051.316:   99.8294%  (        9)
00:14:55.952   4051.316 -  4081.105:   99.8423%  (        9)
00:14:55.952   4081.105 -  4110.895:   99.8538%  (        8)
00:14:55.952   4110.895 -  4140.684:   99.8638%  (        7)
00:14:55.952   4140.684 -  4170.473:   99.8710%  (        5)
00:14:55.952   4170.473 -  4200.262:   99.8796%  (        6)
00:14:55.952   4200.262 -  4230.051:   99.8911%  (        8)
00:14:55.952   4230.051 -  4259.840:   99.8997%  (        6)
00:14:55.952   4259.840 -  4289.629:   99.9068%  (        5)
00:14:55.952   4289.629 -  4319.418:   99.9154%  (        6)
00:14:55.952   4319.418 -  4349.207:   99.9226%  (        5)
00:14:55.952   4349.207 -  4378.996:   99.9326%  (        7)
00:14:55.952   4378.996 -  4408.785:   99.9384%  (        4)
00:14:55.952   4408.785 -  4438.575:   99.9427%  (        3)
00:14:55.952   4438.575 -  4468.364:   99.9470%  (        3)
00:14:55.952   4468.364 -  4498.153:   99.9498%  (        2)
00:14:55.952   4498.153 -  4527.942:   99.9513%  (        1)
00:14:55.952   4527.942 -  4557.731:   99.9527%  (        1)
00:14:55.952   4557.731 -  4587.520:   99.9556%  (        2)
00:14:55.952   4587.520 -  4617.309:   99.9570%  (        1)
00:14:55.952   4617.309 -  4647.098:   99.9599%  (        2)
00:14:55.952   4647.098 -  4676.887:   99.9613%  (        1)
00:14:55.952   4676.887 -  4706.676:   99.9642%  (        2)
00:14:55.952   4706.676 -  4736.465:   99.9670%  (        2)
00:14:55.952   4736.465 -  4766.255:   99.9685%  (        1)
00:14:55.952   4766.255 -  4796.044:   99.9713%  (        2)
00:14:55.952   4796.044 -  4825.833:   99.9728%  (        1)
00:14:55.952   4825.833 -  4855.622:   99.9756%  (        2)
00:14:55.952   4855.622 -  4885.411:   99.9785%  (        2)
00:14:55.952   4885.411 -  4915.200:   99.9799%  (        1)
00:14:55.952   4915.200 -  4944.989:   99.9828%  (        2)
00:14:55.952   4944.989 -  4974.778:   99.9842%  (        1)
00:14:55.952   4974.778 -  5004.567:   99.9871%  (        2)
00:14:55.952   5004.567 -  5034.356:   99.9885%  (        1)
00:14:55.952   5034.356 -  5064.145:   99.9914%  (        2)
00:14:55.952   5064.145 -  5093.935:   99.9928%  (        1)
00:14:55.952   5093.935 -  5123.724:   99.9957%  (        2)
00:14:55.952   5123.724 -  5153.513:   99.9986%  (        2)
00:14:55.952   5183.302 -  5213.091:  100.0000%  (        1)
00:14:55.952  
00:14:55.952   05:05:09 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:14:57.327  Initializing NVMe Controllers
00:14:57.328  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:14:57.328  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:14:57.328  Initialization complete. Launching workers.
00:14:57.328  ========================================================
00:14:57.328                                                                             Latency(us)
00:14:57.328  Device Information                     :       IOPS      MiB/s    Average        min        max
00:14:57.328  PCIE (0000:00:10.0) NSID 1 from core  0:   82780.42     970.08    1546.03     535.25   11434.27
00:14:57.328  ========================================================
00:14:57.328  Total                                  :   82780.42     970.08    1546.03     535.25   11434.27
00:14:57.328  
00:14:57.328  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:14:57.328  =================================================================================
00:14:57.328    1.00000% :   852.713us
00:14:57.328   10.00000% :  1176.669us
00:14:57.328   25.00000% :  1310.720us
00:14:57.328   50.00000% :  1489.455us
00:14:57.328   75.00000% :  1712.873us
00:14:57.328   90.00000% :  1951.185us
00:14:57.328   95.00000% :  2144.815us
00:14:57.328   98.00000% :  2398.022us
00:14:57.328   99.00000% :  2636.335us
00:14:57.328   99.50000% :  3068.276us
00:14:57.328   99.90000% :  9413.353us
00:14:57.328   99.99000% : 11379.433us
00:14:57.328   99.99900% : 11439.011us
00:14:57.328   99.99990% : 11439.011us
00:14:57.328   99.99999% : 11439.011us
00:14:57.328  
00:14:57.328  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:14:57.328  ==============================================================================
00:14:57.328         Range in us     Cumulative    IO count
00:14:57.328    532.480 -   536.204:    0.0012%  (        1)
00:14:57.328    536.204 -   539.927:    0.0024%  (        1)
00:14:57.328    554.822 -   558.545:    0.0036%  (        1)
00:14:57.328    558.545 -   562.269:    0.0060%  (        2)
00:14:57.328    562.269 -   565.993:    0.0254%  (       16)
00:14:57.328    565.993 -   569.716:    0.0266%  (        1)
00:14:57.328    569.716 -   573.440:    0.0278%  (        1)
00:14:57.328    577.164 -   580.887:    0.0302%  (        2)
00:14:57.328    580.887 -   584.611:    0.0326%  (        2)
00:14:57.328    584.611 -   588.335:    0.0338%  (        1)
00:14:57.328    588.335 -   592.058:    0.0568%  (       19)
00:14:57.328    592.058 -   595.782:    0.0701%  (       11)
00:14:57.328    595.782 -   599.505:    0.0809%  (        9)
00:14:57.328    599.505 -   603.229:    0.0882%  (        6)
00:14:57.328    603.229 -   606.953:    0.1208%  (       27)
00:14:57.328    606.953 -   610.676:    0.1413%  (       17)
00:14:57.328    610.676 -   614.400:    0.1582%  (       14)
00:14:57.328    614.400 -   618.124:    0.1703%  (       10)
00:14:57.328    618.124 -   621.847:    0.2162%  (       38)
00:14:57.328    621.847 -   625.571:    0.2234%  (        6)
00:14:57.328    625.571 -   629.295:    0.2295%  (        5)
00:14:57.328    633.018 -   636.742:    0.2355%  (        5)
00:14:57.328    636.742 -   640.465:    0.2416%  (        5)
00:14:57.328    640.465 -   644.189:    0.2440%  (        2)
00:14:57.328    644.189 -   647.913:    0.2500%  (        5)
00:14:57.328    647.913 -   651.636:    0.2826%  (       27)
00:14:57.328    651.636 -   655.360:    0.2935%  (        9)
00:14:57.328    655.360 -   659.084:    0.3249%  (       26)
00:14:57.328    659.084 -   662.807:    0.3744%  (       41)
00:14:57.328    662.807 -   666.531:    0.4300%  (       46)
00:14:57.328    666.531 -   670.255:    0.4397%  (        8)
00:14:57.328    670.255 -   673.978:    0.4964%  (       47)
00:14:57.328    673.978 -   677.702:    0.5037%  (        6)
00:14:57.328    677.702 -   681.425:    0.5157%  (       10)
00:14:57.328    681.425 -   685.149:    0.5411%  (       21)
00:14:57.328    685.149 -   688.873:    0.5544%  (       11)
00:14:57.328    688.873 -   692.596:    0.5628%  (        7)
00:14:57.328    692.596 -   696.320:    0.5737%  (        9)
00:14:57.328    696.320 -   700.044:    0.6027%  (       24)
00:14:57.328    700.044 -   703.767:    0.6196%  (       14)
00:14:57.328    703.767 -   707.491:    0.6293%  (        8)
00:14:57.328    707.491 -   711.215:    0.6414%  (       10)
00:14:57.328    711.215 -   714.938:    0.6486%  (        6)
00:14:57.328    714.938 -   718.662:    0.6510%  (        2)
00:14:57.328    718.662 -   722.385:    0.6571%  (        5)
00:14:57.328    722.385 -   726.109:    0.6619%  (        4)
00:14:57.328    726.109 -   729.833:    0.6691%  (        6)
00:14:57.328    729.833 -   733.556:    0.6716%  (        2)
00:14:57.328    733.556 -   737.280:    0.6812%  (        8)
00:14:57.328    737.280 -   741.004:    0.6885%  (        6)
00:14:57.328    741.004 -   744.727:    0.7102%  (       18)
00:14:57.328    744.727 -   748.451:    0.7199%  (        8)
00:14:57.328    748.451 -   752.175:    0.7247%  (        4)
00:14:57.328    752.175 -   755.898:    0.7295%  (        4)
00:14:57.328    755.898 -   759.622:    0.7356%  (        5)
00:14:57.328    759.622 -   763.345:    0.7428%  (        6)
00:14:57.328    763.345 -   767.069:    0.7513%  (        7)
00:14:57.328    767.069 -   770.793:    0.7573%  (        5)
00:14:57.328    770.793 -   774.516:    0.7718%  (       12)
00:14:57.328    774.516 -   778.240:    0.7754%  (        3)
00:14:57.328    778.240 -   781.964:    0.7863%  (        9)
00:14:57.328    781.964 -   785.687:    0.7972%  (        9)
00:14:57.328    785.687 -   789.411:    0.8056%  (        7)
00:14:57.328    789.411 -   793.135:    0.8117%  (        5)
00:14:57.328    793.135 -   796.858:    0.8165%  (        4)
00:14:57.328    796.858 -   800.582:    0.8237%  (        6)
00:14:57.328    800.582 -   804.305:    0.8334%  (        8)
00:14:57.328    804.305 -   808.029:    0.8455%  (       10)
00:14:57.328    808.029 -   811.753:    0.8648%  (       16)
00:14:57.328    811.753 -   815.476:    0.8769%  (       10)
00:14:57.328    815.476 -   819.200:    0.8902%  (       11)
00:14:57.328    819.200 -   822.924:    0.8974%  (        6)
00:14:57.328    822.924 -   826.647:    0.9035%  (        5)
00:14:57.328    826.647 -   830.371:    0.9155%  (       10)
00:14:57.328    830.371 -   834.095:    0.9300%  (       12)
00:14:57.328    834.095 -   837.818:    0.9506%  (       17)
00:14:57.328    837.818 -   841.542:    0.9626%  (       10)
00:14:57.328    841.542 -   845.265:    0.9747%  (       10)
00:14:57.328    845.265 -   848.989:    0.9892%  (       12)
00:14:57.328    848.989 -   852.713:    1.0001%  (        9)
00:14:57.328    852.713 -   856.436:    1.0110%  (        9)
00:14:57.328    856.436 -   860.160:    1.0146%  (        3)
00:14:57.328    860.160 -   863.884:    1.0303%  (       13)
00:14:57.328    863.884 -   867.607:    1.0399%  (        8)
00:14:57.328    867.607 -   871.331:    1.0460%  (        5)
00:14:57.328    871.331 -   875.055:    1.0581%  (       10)
00:14:57.328    875.055 -   878.778:    1.0701%  (       10)
00:14:57.328    878.778 -   882.502:    1.0798%  (        8)
00:14:57.328    882.502 -   886.225:    1.0955%  (       13)
00:14:57.328    886.225 -   889.949:    1.1148%  (       16)
00:14:57.328    889.949 -   893.673:    1.1329%  (       15)
00:14:57.328    893.673 -   897.396:    1.1474%  (       12)
00:14:57.328    897.396 -   901.120:    1.1619%  (       12)
00:14:57.328    901.120 -   904.844:    1.1752%  (       11)
00:14:57.328    904.844 -   908.567:    1.1982%  (       19)
00:14:57.328    908.567 -   912.291:    1.2078%  (        8)
00:14:57.328    912.291 -   916.015:    1.2223%  (       12)
00:14:57.328    916.015 -   919.738:    1.2404%  (       15)
00:14:57.328    919.738 -   923.462:    1.2598%  (       16)
00:14:57.328    923.462 -   927.185:    1.2888%  (       24)
00:14:57.328    927.185 -   930.909:    1.3081%  (       16)
00:14:57.328    930.909 -   934.633:    1.3334%  (       21)
00:14:57.328    934.633 -   938.356:    1.3685%  (       29)
00:14:57.328    938.356 -   942.080:    1.4011%  (       27)
00:14:57.328    942.080 -   945.804:    1.4204%  (       16)
00:14:57.328    945.804 -   949.527:    1.4446%  (       20)
00:14:57.328    949.527 -   953.251:    1.4929%  (       40)
00:14:57.328    953.251 -   960.698:    1.5859%  (       77)
00:14:57.328    960.698 -   968.145:    1.7018%  (       96)
00:14:57.328    968.145 -   975.593:    1.8154%  (       94)
00:14:57.328    975.593 -   983.040:    1.9132%  (       81)
00:14:57.328    983.040 -   990.487:    2.0473%  (      111)
00:14:57.328    990.487 -   997.935:    2.1693%  (      101)
00:14:57.328    997.935 -  1005.382:    2.3178%  (      123)
00:14:57.328   1005.382 -  1012.829:    2.4785%  (      133)
00:14:57.328   1012.829 -  1020.276:    2.6584%  (      149)
00:14:57.328   1020.276 -  1027.724:    2.8034%  (      120)
00:14:57.328   1027.724 -  1035.171:    2.9954%  (      159)
00:14:57.328   1035.171 -  1042.618:    3.1778%  (      151)
00:14:57.328   1042.618 -  1050.065:    3.3831%  (      170)
00:14:57.328   1050.065 -  1057.513:    3.6356%  (      209)
00:14:57.328   1057.513 -  1064.960:    3.8626%  (      188)
00:14:57.328   1064.960 -  1072.407:    4.0921%  (      190)
00:14:57.328   1072.407 -  1079.855:    4.3784%  (      237)
00:14:57.328   1079.855 -  1087.302:    4.6816%  (      251)
00:14:57.328   1087.302 -  1094.749:    5.0197%  (      280)
00:14:57.328   1094.749 -  1102.196:    5.3604%  (      282)
00:14:57.328   1102.196 -  1109.644:    5.7203%  (      298)
00:14:57.328   1109.644 -  1117.091:    6.1334%  (      342)
00:14:57.328   1117.091 -  1124.538:    6.5235%  (      323)
00:14:57.328   1124.538 -  1131.985:    6.9813%  (      379)
00:14:57.328   1131.985 -  1139.433:    7.4644%  (      400)
00:14:57.328   1139.433 -  1146.880:    7.9463%  (      399)
00:14:57.328   1146.880 -  1154.327:    8.5104%  (      467)
00:14:57.328   1154.327 -  1161.775:    9.1083%  (      495)
00:14:57.328   1161.775 -  1169.222:    9.7001%  (      490)
00:14:57.328   1169.222 -  1176.669:   10.3040%  (      500)
00:14:57.328   1176.669 -  1184.116:   10.8958%  (      490)
00:14:57.328   1184.116 -  1191.564:   11.5324%  (      527)
00:14:57.329   1191.564 -  1199.011:   12.1822%  (      538)
00:14:57.329   1199.011 -  1206.458:   12.8779%  (      576)
00:14:57.329   1206.458 -  1213.905:   13.6388%  (      630)
00:14:57.329   1213.905 -  1221.353:   14.4034%  (      633)
00:14:57.329   1221.353 -  1228.800:   15.1679%  (      633)
00:14:57.329   1228.800 -  1236.247:   15.9228%  (      625)
00:14:57.329   1236.247 -  1243.695:   16.7393%  (      676)
00:14:57.329   1243.695 -  1251.142:   17.6041%  (      716)
00:14:57.329   1251.142 -  1258.589:   18.4339%  (      687)
00:14:57.329   1258.589 -  1266.036:   19.3313%  (      743)
00:14:57.329   1266.036 -  1273.484:   20.1104%  (      645)
00:14:57.329   1273.484 -  1280.931:   21.1286%  (      843)
00:14:57.329   1280.931 -  1288.378:   22.0381%  (      753)
00:14:57.329   1288.378 -  1295.825:   23.0213%  (      814)
00:14:57.329   1295.825 -  1303.273:   24.0552%  (      856)
00:14:57.329   1303.273 -  1310.720:   25.0202%  (      799)
00:14:57.329   1310.720 -  1318.167:   26.0324%  (      838)
00:14:57.329   1318.167 -  1325.615:   27.1569%  (      931)
00:14:57.329   1325.615 -  1333.062:   28.2294%  (      888)
00:14:57.329   1333.062 -  1340.509:   29.1981%  (      802)
00:14:57.329   1340.509 -  1347.956:   30.2465%  (      868)
00:14:57.329   1347.956 -  1355.404:   31.2696%  (      847)
00:14:57.329   1355.404 -  1362.851:   32.3035%  (      856)
00:14:57.329   1362.851 -  1370.298:   33.3917%  (      901)
00:14:57.329   1370.298 -  1377.745:   34.4425%  (      870)
00:14:57.329   1377.745 -  1385.193:   35.4728%  (      853)
00:14:57.329   1385.193 -  1392.640:   36.4946%  (      846)
00:14:57.329   1392.640 -  1400.087:   37.5068%  (      838)
00:14:57.329   1400.087 -  1407.535:   38.5842%  (      892)
00:14:57.329   1407.535 -  1414.982:   39.6978%  (      922)
00:14:57.329   1414.982 -  1422.429:   40.7873%  (      902)
00:14:57.329   1422.429 -  1429.876:   41.9407%  (      955)
00:14:57.329   1429.876 -  1437.324:   43.1232%  (      979)
00:14:57.329   1437.324 -  1444.771:   44.2574%  (      939)
00:14:57.329   1444.771 -  1452.218:   45.2840%  (      850)
00:14:57.329   1452.218 -  1459.665:   46.3324%  (      868)
00:14:57.329   1459.665 -  1467.113:   47.4243%  (      904)
00:14:57.329   1467.113 -  1474.560:   48.4401%  (      841)
00:14:57.329   1474.560 -  1482.007:   49.3737%  (      773)
00:14:57.329   1482.007 -  1489.455:   50.3062%  (      772)
00:14:57.329   1489.455 -  1496.902:   51.2930%  (      817)
00:14:57.329   1496.902 -  1504.349:   52.2182%  (      766)
00:14:57.329   1504.349 -  1511.796:   53.2316%  (      839)
00:14:57.329   1511.796 -  1519.244:   54.2147%  (      814)
00:14:57.329   1519.244 -  1526.691:   55.1194%  (      749)
00:14:57.329   1526.691 -  1534.138:   56.0349%  (      758)
00:14:57.329   1534.138 -  1541.585:   56.8768%  (      697)
00:14:57.329   1541.585 -  1549.033:   57.7718%  (      741)
00:14:57.329   1549.033 -  1556.480:   58.5678%  (      659)
00:14:57.329   1556.480 -  1563.927:   59.4567%  (      736)
00:14:57.329   1563.927 -  1571.375:   60.3203%  (      715)
00:14:57.329   1571.375 -  1578.822:   61.1634%  (      698)
00:14:57.329   1578.822 -  1586.269:   62.0318%  (      719)
00:14:57.329   1586.269 -  1593.716:   62.9788%  (      784)
00:14:57.329   1593.716 -  1601.164:   63.8001%  (      680)
00:14:57.329   1601.164 -  1608.611:   64.6625%  (      714)
00:14:57.329   1608.611 -  1616.058:   65.4705%  (      669)
00:14:57.329   1616.058 -  1623.505:   66.3257%  (      708)
00:14:57.329   1623.505 -  1630.953:   67.1156%  (      654)
00:14:57.329   1630.953 -  1638.400:   67.8970%  (      647)
00:14:57.329   1638.400 -  1645.847:   68.6326%  (      609)
00:14:57.329   1645.847 -  1653.295:   69.3706%  (      611)
00:14:57.329   1653.295 -  1660.742:   70.1581%  (      652)
00:14:57.329   1660.742 -  1668.189:   71.0084%  (      704)
00:14:57.329   1668.189 -  1675.636:   71.8020%  (      657)
00:14:57.329   1675.636 -  1683.084:   72.5569%  (      625)
00:14:57.329   1683.084 -  1690.531:   73.3492%  (      656)
00:14:57.329   1690.531 -  1697.978:   74.1210%  (      639)
00:14:57.329   1697.978 -  1705.425:   74.8566%  (      609)
00:14:57.329   1705.425 -  1712.873:   75.6332%  (      643)
00:14:57.329   1712.873 -  1720.320:   76.3724%  (      612)
00:14:57.329   1720.320 -  1727.767:   76.9956%  (      516)
00:14:57.329   1727.767 -  1735.215:   77.6382%  (      532)
00:14:57.329   1735.215 -  1742.662:   78.2735%  (      526)
00:14:57.329   1742.662 -  1750.109:   78.9076%  (      525)
00:14:57.329   1750.109 -  1757.556:   79.4983%  (      489)
00:14:57.329   1757.556 -  1765.004:   80.1022%  (      500)
00:14:57.329   1765.004 -  1772.451:   80.6505%  (      454)
00:14:57.329   1772.451 -  1779.898:   81.1832%  (      441)
00:14:57.329   1779.898 -  1787.345:   81.7038%  (      431)
00:14:57.329   1787.345 -  1794.793:   82.2292%  (      435)
00:14:57.329   1794.793 -  1802.240:   82.7099%  (      398)
00:14:57.329   1802.240 -  1809.687:   83.2244%  (      426)
00:14:57.329   1809.687 -  1817.135:   83.6870%  (      383)
00:14:57.329   1817.135 -  1824.582:   84.1677%  (      398)
00:14:57.329   1824.582 -  1832.029:   84.6086%  (      365)
00:14:57.329   1832.029 -  1839.476:   85.0120%  (      334)
00:14:57.329   1839.476 -  1846.924:   85.4203%  (      338)
00:14:57.329   1846.924 -  1854.371:   85.8201%  (      331)
00:14:57.329   1854.371 -  1861.818:   86.2090%  (      322)
00:14:57.329   1861.818 -  1869.265:   86.5786%  (      306)
00:14:57.329   1869.265 -  1876.713:   86.9288%  (      290)
00:14:57.329   1876.713 -  1884.160:   87.3069%  (      313)
00:14:57.329   1884.160 -  1891.607:   87.6548%  (      288)
00:14:57.329   1891.607 -  1899.055:   88.0328%  (      313)
00:14:57.329   1899.055 -  1906.502:   88.3674%  (      277)
00:14:57.329   1906.502 -  1921.396:   89.0063%  (      529)
00:14:57.329   1921.396 -  1936.291:   89.5547%  (      454)
00:14:57.329   1936.291 -  1951.185:   90.1139%  (      463)
00:14:57.329   1951.185 -  1966.080:   90.6526%  (      446)
00:14:57.329   1966.080 -  1980.975:   91.1623%  (      422)
00:14:57.329   1980.975 -  1995.869:   91.6358%  (      392)
00:14:57.329   1995.869 -  2010.764:   92.0670%  (      357)
00:14:57.329   2010.764 -  2025.658:   92.4740%  (      337)
00:14:57.329   2025.658 -  2040.553:   92.8786%  (      335)
00:14:57.329   2040.553 -  2055.447:   93.2337%  (      294)
00:14:57.329   2055.447 -  2070.342:   93.6057%  (      308)
00:14:57.329   2070.342 -  2085.236:   93.9149%  (      256)
00:14:57.329   2085.236 -  2100.131:   94.2266%  (      258)
00:14:57.329   2100.131 -  2115.025:   94.5177%  (      241)
00:14:57.329   2115.025 -  2129.920:   94.8148%  (      246)
00:14:57.329   2129.920 -  2144.815:   95.0877%  (      226)
00:14:57.329   2144.815 -  2159.709:   95.3547%  (      221)
00:14:57.329   2159.709 -  2174.604:   95.5661%  (      175)
00:14:57.329   2174.604 -  2189.498:   95.7919%  (      187)
00:14:57.329   2189.498 -  2204.393:   96.0383%  (      204)
00:14:57.329   2204.393 -  2219.287:   96.2569%  (      181)
00:14:57.329   2219.287 -  2234.182:   96.4671%  (      174)
00:14:57.329   2234.182 -  2249.076:   96.6833%  (      179)
00:14:57.329   2249.076 -  2263.971:   96.8645%  (      150)
00:14:57.329   2263.971 -  2278.865:   97.0613%  (      163)
00:14:57.329   2278.865 -  2293.760:   97.2075%  (      121)
00:14:57.329   2293.760 -  2308.655:   97.3428%  (      112)
00:14:57.329   2308.655 -  2323.549:   97.4877%  (      120)
00:14:57.329   2323.549 -  2338.444:   97.6145%  (      105)
00:14:57.329   2338.444 -  2353.338:   97.7353%  (      100)
00:14:57.329   2353.338 -  2368.233:   97.8392%  (       86)
00:14:57.329   2368.233 -  2383.127:   97.9576%  (       98)
00:14:57.329   2383.127 -  2398.022:   98.0699%  (       93)
00:14:57.329   2398.022 -  2412.916:   98.1689%  (       82)
00:14:57.329   2412.916 -  2427.811:   98.2680%  (       82)
00:14:57.329   2427.811 -  2442.705:   98.3537%  (       71)
00:14:57.329   2442.705 -  2457.600:   98.4226%  (       57)
00:14:57.329   2457.600 -  2472.495:   98.4793%  (       47)
00:14:57.329   2472.495 -  2487.389:   98.5530%  (       61)
00:14:57.329   2487.389 -  2502.284:   98.6086%  (       46)
00:14:57.329   2502.284 -  2517.178:   98.6605%  (       43)
00:14:57.329   2517.178 -  2532.073:   98.7149%  (       45)
00:14:57.329   2532.073 -  2546.967:   98.7583%  (       36)
00:14:57.329   2546.967 -  2561.862:   98.8055%  (       39)
00:14:57.329   2561.862 -  2576.756:   98.8501%  (       37)
00:14:57.329   2576.756 -  2591.651:   98.8948%  (       37)
00:14:57.329   2591.651 -  2606.545:   98.9299%  (       29)
00:14:57.329   2606.545 -  2621.440:   98.9637%  (       28)
00:14:57.329   2621.440 -  2636.335:   99.0023%  (       32)
00:14:57.329   2636.335 -  2651.229:   99.0386%  (       30)
00:14:57.329   2651.229 -  2666.124:   99.0688%  (       25)
00:14:57.329   2666.124 -  2681.018:   99.1002%  (       26)
00:14:57.329   2681.018 -  2695.913:   99.1400%  (       33)
00:14:57.329   2695.913 -  2710.807:   99.1630%  (       19)
00:14:57.329   2710.807 -  2725.702:   99.1871%  (       20)
00:14:57.329   2725.702 -  2740.596:   99.2113%  (       20)
00:14:57.329   2740.596 -  2755.491:   99.2282%  (       14)
00:14:57.329   2755.491 -  2770.385:   99.2439%  (       13)
00:14:57.329   2770.385 -  2785.280:   99.2608%  (       14)
00:14:57.329   2785.280 -  2800.175:   99.2777%  (       14)
00:14:57.329   2800.175 -  2815.069:   99.2934%  (       13)
00:14:57.329   2815.069 -  2829.964:   99.3043%  (        9)
00:14:57.329   2829.964 -  2844.858:   99.3212%  (       14)
00:14:57.329   2844.858 -  2859.753:   99.3635%  (       35)
00:14:57.329   2859.753 -  2874.647:   99.3816%  (       15)
00:14:57.329   2874.647 -  2889.542:   99.3961%  (       12)
00:14:57.329   2889.542 -  2904.436:   99.4070%  (        9)
00:14:57.329   2904.436 -  2919.331:   99.4227%  (       13)
00:14:57.329   2919.331 -  2934.225:   99.4347%  (       10)
00:14:57.330   2934.225 -  2949.120:   99.4456%  (        9)
00:14:57.330   2949.120 -  2964.015:   99.4577%  (       10)
00:14:57.330   2964.015 -  2978.909:   99.4649%  (        6)
00:14:57.330   2978.909 -  2993.804:   99.4746%  (        8)
00:14:57.330   2993.804 -  3008.698:   99.4818%  (        6)
00:14:57.330   3008.698 -  3023.593:   99.4867%  (        4)
00:14:57.330   3023.593 -  3038.487:   99.4915%  (        4)
00:14:57.330   3038.487 -  3053.382:   99.4975%  (        5)
00:14:57.330   3053.382 -  3068.276:   99.5048%  (        6)
00:14:57.330   3068.276 -  3083.171:   99.5096%  (        4)
00:14:57.330   3083.171 -  3098.065:   99.5132%  (        3)
00:14:57.330   3098.065 -  3112.960:   99.5157%  (        2)
00:14:57.330   3112.960 -  3127.855:   99.5181%  (        2)
00:14:57.330   3127.855 -  3142.749:   99.5205%  (        2)
00:14:57.330   3142.749 -  3157.644:   99.5253%  (        4)
00:14:57.330   3157.644 -  3172.538:   99.5289%  (        3)
00:14:57.330   3172.538 -  3187.433:   99.5338%  (        4)
00:14:57.330   3187.433 -  3202.327:   99.5362%  (        2)
00:14:57.330   3202.327 -  3217.222:   99.5422%  (        5)
00:14:57.330   3217.222 -  3232.116:   99.5483%  (        5)
00:14:57.330   3232.116 -  3247.011:   99.5531%  (        4)
00:14:57.330   3247.011 -  3261.905:   99.5579%  (        4)
00:14:57.330   3261.905 -  3276.800:   99.5652%  (        6)
00:14:57.330   3276.800 -  3291.695:   99.5700%  (        4)
00:14:57.330   3291.695 -  3306.589:   99.5761%  (        5)
00:14:57.330   3306.589 -  3321.484:   99.5797%  (        3)
00:14:57.330   3321.484 -  3336.378:   99.5869%  (        6)
00:14:57.330   3336.378 -  3351.273:   99.5942%  (        6)
00:14:57.330   3351.273 -  3366.167:   99.6014%  (        6)
00:14:57.330   3366.167 -  3381.062:   99.6075%  (        5)
00:14:57.330   3381.062 -  3395.956:   99.6123%  (        4)
00:14:57.330   3395.956 -  3410.851:   99.6159%  (        3)
00:14:57.330   3410.851 -  3425.745:   99.6219%  (        5)
00:14:57.330   3425.745 -  3440.640:   99.6280%  (        5)
00:14:57.330   3440.640 -  3455.535:   99.6328%  (        4)
00:14:57.330   3455.535 -  3470.429:   99.6413%  (        7)
00:14:57.330   3470.429 -  3485.324:   99.6473%  (        5)
00:14:57.330   3485.324 -  3500.218:   99.6521%  (        4)
00:14:57.330   3500.218 -  3515.113:   99.6570%  (        4)
00:14:57.330   3515.113 -  3530.007:   99.6582%  (        1)
00:14:57.330   3530.007 -  3544.902:   99.6666%  (        7)
00:14:57.330   3544.902 -  3559.796:   99.6703%  (        3)
00:14:57.330   3559.796 -  3574.691:   99.6739%  (        3)
00:14:57.330   3574.691 -  3589.585:   99.6811%  (        6)
00:14:57.330   3589.585 -  3604.480:   99.6848%  (        3)
00:14:57.330   3604.480 -  3619.375:   99.6896%  (        4)
00:14:57.330   3619.375 -  3634.269:   99.6956%  (        5)
00:14:57.330   3634.269 -  3649.164:   99.7017%  (        5)
00:14:57.330   3649.164 -  3664.058:   99.7089%  (        6)
00:14:57.330   3664.058 -  3678.953:   99.7162%  (        6)
00:14:57.330   3678.953 -  3693.847:   99.7210%  (        4)
00:14:57.330   3693.847 -  3708.742:   99.7234%  (        2)
00:14:57.330   3708.742 -  3723.636:   99.7258%  (        2)
00:14:57.330   3723.636 -  3738.531:   99.7319%  (        5)
00:14:57.330   3738.531 -  3753.425:   99.7343%  (        2)
00:14:57.330   3753.425 -  3768.320:   99.7403%  (        5)
00:14:57.330   3768.320 -  3783.215:   99.7427%  (        2)
00:14:57.330   3783.215 -  3798.109:   99.7451%  (        2)
00:14:57.330   3798.109 -  3813.004:   99.7500%  (        4)
00:14:57.330   3813.004 -  3842.793:   99.7548%  (        4)
00:14:57.330   3842.793 -  3872.582:   99.7584%  (        3)
00:14:57.330   3872.582 -  3902.371:   99.7621%  (        3)
00:14:57.330   3902.371 -  3932.160:   99.7633%  (        1)
00:14:57.330   3932.160 -  3961.949:   99.7645%  (        1)
00:14:57.330   3961.949 -  3991.738:   99.7657%  (        1)
00:14:57.330   3991.738 -  4021.527:   99.7681%  (        2)
00:14:57.330   4021.527 -  4051.316:   99.7705%  (        2)
00:14:57.330   4081.105 -  4110.895:   99.7729%  (        2)
00:14:57.330   4110.895 -  4140.684:   99.7778%  (        4)
00:14:57.330   4140.684 -  4170.473:   99.7790%  (        1)
00:14:57.330   4170.473 -  4200.262:   99.7802%  (        1)
00:14:57.330   4200.262 -  4230.051:   99.7838%  (        3)
00:14:57.330   4230.051 -  4259.840:   99.7886%  (        4)
00:14:57.330   4259.840 -  4289.629:   99.7935%  (        4)
00:14:57.330   4289.629 -  4319.418:   99.7971%  (        3)
00:14:57.330   4319.418 -  4349.207:   99.8031%  (        5)
00:14:57.330   4349.207 -  4378.996:   99.8067%  (        3)
00:14:57.330   4378.996 -  4408.785:   99.8104%  (        3)
00:14:57.330   4408.785 -  4438.575:   99.8116%  (        1)
00:14:57.330   4438.575 -  4468.364:   99.8140%  (        2)
00:14:57.330   4468.364 -  4498.153:   99.8164%  (        2)
00:14:57.330   4498.153 -  4527.942:   99.8176%  (        1)
00:14:57.330   4527.942 -  4557.731:   99.8188%  (        1)
00:14:57.330   4557.731 -  4587.520:   99.8200%  (        1)
00:14:57.330   4587.520 -  4617.309:   99.8212%  (        1)
00:14:57.330   4617.309 -  4647.098:   99.8237%  (        2)
00:14:57.330   4647.098 -  4676.887:   99.8273%  (        3)
00:14:57.330   4676.887 -  4706.676:   99.8297%  (        2)
00:14:57.330   4706.676 -  4736.465:   99.8309%  (        1)
00:14:57.330   4766.255 -  4796.044:   99.8333%  (        2)
00:14:57.330   4796.044 -  4825.833:   99.8345%  (        1)
00:14:57.330   4944.989 -  4974.778:   99.8369%  (        2)
00:14:57.330   5034.356 -  5064.145:   99.8382%  (        1)
00:14:57.330   5064.145 -  5093.935:   99.8394%  (        1)
00:14:57.330   5093.935 -  5123.724:   99.8406%  (        1)
00:14:57.330   5123.724 -  5153.513:   99.8418%  (        1)
00:14:57.330   5183.302 -  5213.091:   99.8430%  (        1)
00:14:57.330   5213.091 -  5242.880:   99.8442%  (        1)
00:14:57.330   5332.247 -  5362.036:   99.8454%  (        1)
00:14:57.330   5510.982 -  5540.771:   99.8466%  (        1)
00:14:57.330   6076.975 -  6106.764:   99.8478%  (        1)
00:14:57.330   6106.764 -  6136.553:   99.8490%  (        1)
00:14:57.330   6196.131 -  6225.920:   99.8502%  (        1)
00:14:57.330   6374.865 -  6404.655:   99.8514%  (        1)
00:14:57.330   6434.444 -  6464.233:   99.8539%  (        2)
00:14:57.330   6464.233 -  6494.022:   99.8551%  (        1)
00:14:57.330   6672.756 -  6702.545:   99.8587%  (        3)
00:14:57.330   6762.124 -  6791.913:   99.8611%  (        2)
00:14:57.330   6791.913 -  6821.702:   99.8623%  (        1)
00:14:57.330   6940.858 -  6970.647:   99.8635%  (        1)
00:14:57.330   7089.804 -  7119.593:   99.8647%  (        1)
00:14:57.330   7119.593 -  7149.382:   99.8683%  (        3)
00:14:57.330   7149.382 -  7179.171:   99.8696%  (        1)
00:14:57.330   7179.171 -  7208.960:   99.8708%  (        1)
00:14:57.330   7208.960 -  7238.749:   99.8720%  (        1)
00:14:57.330   7626.007 -  7685.585:   99.8732%  (        1)
00:14:57.330   8221.789 -  8281.367:   99.8744%  (        1)
00:14:57.330   8519.680 -  8579.258:   99.8756%  (        1)
00:14:57.330   8579.258 -  8638.836:   99.8768%  (        1)
00:14:57.330   8698.415 -  8757.993:   99.8780%  (        1)
00:14:57.330   8757.993 -  8817.571:   99.8804%  (        2)
00:14:57.330   8817.571 -  8877.149:   99.8828%  (        2)
00:14:57.330   8936.727 -  8996.305:   99.8877%  (        4)
00:14:57.330   8996.305 -  9055.884:   99.8889%  (        1)
00:14:57.330   9055.884 -  9115.462:   99.8925%  (        3)
00:14:57.330   9115.462 -  9175.040:   99.8937%  (        1)
00:14:57.330   9175.040 -  9234.618:   99.8949%  (        1)
00:14:57.330   9294.196 -  9353.775:   99.8985%  (        3)
00:14:57.330   9353.775 -  9413.353:   99.9022%  (        3)
00:14:57.330   9413.353 -  9472.931:   99.9082%  (        5)
00:14:57.330   9472.931 -  9532.509:   99.9142%  (        5)
00:14:57.330   9532.509 -  9592.087:   99.9167%  (        2)
00:14:57.330   9592.087 -  9651.665:   99.9227%  (        5)
00:14:57.330   9651.665 -  9711.244:   99.9263%  (        3)
00:14:57.330   9711.244 -  9770.822:   99.9287%  (        2)
00:14:57.330   9770.822 -  9830.400:   99.9336%  (        4)
00:14:57.330   9830.400 -  9889.978:   99.9360%  (        2)
00:14:57.330   9889.978 -  9949.556:   99.9408%  (        4)
00:14:57.330   9949.556 - 10009.135:   99.9444%  (        3)
00:14:57.330  10009.135 - 10068.713:   99.9481%  (        3)
00:14:57.330  10068.713 - 10128.291:   99.9529%  (        4)
00:14:57.330  10128.291 - 10187.869:   99.9541%  (        1)
00:14:57.330  10187.869 - 10247.447:   99.9553%  (        1)
00:14:57.330  10247.447 - 10307.025:   99.9565%  (        1)
00:14:57.330  10307.025 - 10366.604:   99.9577%  (        1)
00:14:57.330  10366.604 - 10426.182:   99.9613%  (        3)
00:14:57.330  10604.916 - 10664.495:   99.9626%  (        1)
00:14:57.330  10724.073 - 10783.651:   99.9662%  (        3)
00:14:57.330  10783.651 - 10843.229:   99.9674%  (        1)
00:14:57.330  10902.807 - 10962.385:   99.9686%  (        1)
00:14:57.330  10962.385 - 11021.964:   99.9710%  (        2)
00:14:57.330  11021.964 - 11081.542:   99.9758%  (        4)
00:14:57.330  11081.542 - 11141.120:   99.9771%  (        1)
00:14:57.330  11141.120 - 11200.698:   99.9807%  (        3)
00:14:57.330  11200.698 - 11260.276:   99.9867%  (        5)
00:14:57.330  11260.276 - 11319.855:   99.9891%  (        2)
00:14:57.330  11319.855 - 11379.433:   99.9903%  (        1)
00:14:57.330  11379.433 - 11439.011:  100.0000%  (        8)
00:14:57.330  
00:14:57.330   05:05:10 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:14:57.330  
00:14:57.330  real	0m2.594s
00:14:57.330  user	0m2.207s
00:14:57.330  sys	0m0.229s
00:14:57.330   05:05:10 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:57.330   05:05:10 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:14:57.330  ************************************
00:14:57.330  END TEST nvme_perf
00:14:57.330  ************************************
00:14:57.330   05:05:10 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:57.330   05:05:10 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:14:57.330   05:05:10 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:57.330   05:05:10 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:57.330  ************************************
00:14:57.330  START TEST nvme_hello_world
00:14:57.330  ************************************
00:14:57.330   05:05:10 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:14:57.330  Initializing NVMe Controllers
00:14:57.330  Attached to 0000:00:10.0
00:14:57.330    Namespace ID: 1 size: 5GB
00:14:57.330  Initialization complete.
00:14:57.330  INFO: using host memory buffer for IO
00:14:57.330  Hello world!
00:14:57.330  
00:14:57.330  real	0m0.310s
00:14:57.330  user	0m0.123s
00:14:57.331  sys	0m0.110s
00:14:57.331   05:05:11 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:57.331   05:05:11 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:14:57.331  ************************************
00:14:57.331  END TEST nvme_hello_world
00:14:57.331  ************************************
00:14:57.589   05:05:11 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:57.589   05:05:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:57.589   05:05:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:57.589   05:05:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:57.589  ************************************
00:14:57.589  START TEST nvme_sgl
00:14:57.589  ************************************
00:14:57.589   05:05:11 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:14:57.848  NVMe Readv/Writev Request test
00:14:57.848  Attached to 0000:00:10.0
00:14:57.848  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:14:57.848  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:14:57.848  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:14:57.848  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:14:57.848  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:14:57.848  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:14:57.848  0000:00:10.0: build_io_request_2 test passed
00:14:57.848  0000:00:10.0: build_io_request_4 test passed
00:14:57.848  0000:00:10.0: build_io_request_5 test passed
00:14:57.848  0000:00:10.0: build_io_request_6 test passed
00:14:57.848  0000:00:10.0: build_io_request_7 test passed
00:14:57.848  0000:00:10.0: build_io_request_10 test passed
00:14:57.848  Cleaning up...
00:14:57.848  
00:14:57.848  real	0m0.363s
00:14:57.848  user	0m0.135s
00:14:57.848  sys	0m0.144s
00:14:57.848   05:05:11 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:57.848   05:05:11 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:14:57.848  ************************************
00:14:57.848  END TEST nvme_sgl
00:14:57.848  ************************************
00:14:57.848   05:05:11 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:57.848   05:05:11 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:57.848   05:05:11 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:57.848   05:05:11 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:57.848  ************************************
00:14:57.848  START TEST nvme_e2edp
00:14:57.848  ************************************
00:14:57.848   05:05:11 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:14:58.107  NVMe Write/Read with End-to-End data protection test
00:14:58.107  Attached to 0000:00:10.0
00:14:58.107  Cleaning up...
00:14:58.107  
00:14:58.107  real	0m0.306s
00:14:58.107  user	0m0.135s
00:14:58.107  sys	0m0.105s
00:14:58.107   05:05:12 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:58.107   05:05:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:14:58.107  ************************************
00:14:58.107  END TEST nvme_e2edp
00:14:58.107  ************************************
00:14:58.107   05:05:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:58.107   05:05:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:58.107   05:05:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:58.107   05:05:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:58.107  ************************************
00:14:58.107  START TEST nvme_reserve
00:14:58.107  ************************************
00:14:58.107   05:05:12 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:14:58.674  =====================================================
00:14:58.674  NVMe Controller at PCI bus 0, device 16, function 0
00:14:58.674  =====================================================
00:14:58.674  Reservations:                Not Supported
00:14:58.674  Reservation test passed
00:14:58.674  
00:14:58.674  real	0m0.290s
00:14:58.674  user	0m0.120s
00:14:58.674  sys	0m0.111s
00:14:58.674   05:05:12 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:58.674   05:05:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:14:58.674  ************************************
00:14:58.674  END TEST nvme_reserve
00:14:58.674  ************************************
00:14:58.674   05:05:12 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:58.674   05:05:12 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:14:58.674   05:05:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:58.674   05:05:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:58.674  ************************************
00:14:58.674  START TEST nvme_err_injection
00:14:58.674  ************************************
00:14:58.674   05:05:12 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:14:58.932  NVMe Error Injection test
00:14:58.932  Attached to 0000:00:10.0
00:14:58.932  0000:00:10.0: get features failed as expected
00:14:58.932  0000:00:10.0: get features successfully as expected
00:14:58.932  0000:00:10.0: read failed as expected
00:14:58.932  0000:00:10.0: read successfully as expected
00:14:58.932  Cleaning up...
00:14:58.932  
00:14:58.932  real	0m0.347s
00:14:58.932  user	0m0.126s
00:14:58.932  sys	0m0.108s
00:14:58.932   05:05:12 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:14:58.932   05:05:12 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:14:58.932  ************************************
00:14:58.932  END TEST nvme_err_injection
00:14:58.932  ************************************
00:14:58.932   05:05:12 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:14:58.932   05:05:12 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:14:58.932   05:05:12 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:14:58.932   05:05:12 nvme -- common/autotest_common.sh@10 -- # set +x
00:14:58.932  ************************************
00:14:58.932  START TEST nvme_overhead
00:14:58.932  ************************************
00:14:58.932   05:05:12 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:15:00.310  Initializing NVMe Controllers
00:15:00.310  Attached to 0000:00:10.0
00:15:00.310  Initialization complete. Launching workers.
00:15:00.310  submit (in ns)   avg, min, max =  15023.0,  10417.3,  50027.3
00:15:00.310  complete (in ns) avg, min, max =   9445.5,   7305.5, 140179.1
00:15:00.310  
00:15:00.310  Submit histogram
00:15:00.310  ================
00:15:00.310         Range in us     Cumulative     Count
00:15:00.310     10.415 -    10.473:    0.0121%  (        1)
00:15:00.310     11.113 -    11.171:    0.0243%  (        1)
00:15:00.310     11.636 -    11.695:    0.1214%  (        8)
00:15:00.310     11.695 -    11.753:    0.4005%  (       23)
00:15:00.310     11.753 -    11.811:    1.2379%  (       69)
00:15:00.310     11.811 -    11.869:    2.6214%  (      114)
00:15:00.310     11.869 -    11.927:    4.8301%  (      182)
00:15:00.310     11.927 -    11.985:    7.3786%  (      210)
00:15:00.310     11.985 -    12.044:   10.1214%  (      226)
00:15:00.310     12.044 -    12.102:   13.1432%  (      249)
00:15:00.310     12.102 -    12.160:   17.3786%  (      349)
00:15:00.310     12.160 -    12.218:   22.1481%  (      393)
00:15:00.310     12.218 -    12.276:   26.3956%  (      350)
00:15:00.310     12.276 -    12.335:   29.8786%  (      287)
00:15:00.310     12.335 -    12.393:   33.4830%  (      297)
00:15:00.310     12.393 -    12.451:   37.1238%  (      300)
00:15:00.310     12.451 -    12.509:   40.6068%  (      287)
00:15:00.310     12.509 -    12.567:   44.0413%  (      283)
00:15:00.310     12.567 -    12.625:   46.6990%  (      219)
00:15:00.310     12.625 -    12.684:   48.7621%  (      170)
00:15:00.310     12.684 -    12.742:   50.8617%  (      173)
00:15:00.310     12.742 -    12.800:   52.8155%  (      161)
00:15:00.310     12.800 -    12.858:   54.9150%  (      173)
00:15:00.310     12.858 -    12.916:   56.2500%  (      110)
00:15:00.310     12.916 -    12.975:   57.3422%  (       90)
00:15:00.310     12.975 -    13.033:   58.2403%  (       74)
00:15:00.310     13.033 -    13.091:   58.8350%  (       49)
00:15:00.310     13.091 -    13.149:   59.4417%  (       50)
00:15:00.310     13.149 -    13.207:   60.0243%  (       48)
00:15:00.310     13.207 -    13.265:   60.4976%  (       39)
00:15:00.310     13.265 -    13.324:   60.8738%  (       31)
00:15:00.310     13.324 -    13.382:   61.1893%  (       26)
00:15:00.310     13.382 -    13.440:   61.4563%  (       22)
00:15:00.310     13.440 -    13.498:   61.6748%  (       18)
00:15:00.310     13.498 -    13.556:   61.8447%  (       14)
00:15:00.310     13.556 -    13.615:   62.0995%  (       21)
00:15:00.310     13.615 -    13.673:   62.2209%  (       10)
00:15:00.310     13.673 -    13.731:   62.4029%  (       15)
00:15:00.310     13.731 -    13.789:   62.5364%  (       11)
00:15:00.310     13.789 -    13.847:   62.6092%  (        6)
00:15:00.310     13.847 -    13.905:   62.6942%  (        7)
00:15:00.310     13.905 -    13.964:   62.7913%  (        8)
00:15:00.310     13.964 -    14.022:   62.8277%  (        3)
00:15:00.310     14.022 -    14.080:   62.8762%  (        4)
00:15:00.310     14.080 -    14.138:   62.9490%  (        6)
00:15:00.310     14.138 -    14.196:   62.9854%  (        3)
00:15:00.310     14.196 -    14.255:   63.0218%  (        3)
00:15:00.310     14.255 -    14.313:   63.0583%  (        3)
00:15:00.310     14.313 -    14.371:   63.0825%  (        2)
00:15:00.310     14.371 -    14.429:   63.0947%  (        1)
00:15:00.310     14.429 -    14.487:   63.1189%  (        2)
00:15:00.310     14.487 -    14.545:   63.1311%  (        1)
00:15:00.310     14.545 -    14.604:   63.1553%  (        2)
00:15:00.310     14.604 -    14.662:   63.1796%  (        2)
00:15:00.310     14.662 -    14.720:   63.2039%  (        2)
00:15:00.310     14.720 -    14.778:   63.2160%  (        1)
00:15:00.310     14.836 -    14.895:   63.2524%  (        3)
00:15:00.310     14.895 -    15.011:   63.3131%  (        5)
00:15:00.310     15.011 -    15.127:   63.3374%  (        2)
00:15:00.310     15.127 -    15.244:   63.3738%  (        3)
00:15:00.310     15.244 -    15.360:   63.4102%  (        3)
00:15:00.310     15.360 -    15.476:   63.4466%  (        3)
00:15:00.310     15.476 -    15.593:   63.4587%  (        1)
00:15:00.310     15.593 -    15.709:   63.4951%  (        3)
00:15:00.310     15.709 -    15.825:   63.5194%  (        2)
00:15:00.310     15.825 -    15.942:   63.5437%  (        2)
00:15:00.310     15.942 -    16.058:   63.5680%  (        2)
00:15:00.310     16.058 -    16.175:   63.6044%  (        3)
00:15:00.310     16.175 -    16.291:   63.6529%  (        4)
00:15:00.310     16.291 -    16.407:   63.6772%  (        2)
00:15:00.310     16.407 -    16.524:   63.7015%  (        2)
00:15:00.310     16.524 -    16.640:   63.7379%  (        3)
00:15:00.310     16.640 -    16.756:   64.4660%  (       60)
00:15:00.310     16.756 -    16.873:   70.3155%  (      482)
00:15:00.310     16.873 -    16.989:   78.8592%  (      704)
00:15:00.310     16.989 -    17.105:   83.7136%  (      400)
00:15:00.310     17.105 -    17.222:   86.2743%  (      211)
00:15:00.310     17.222 -    17.338:   87.3665%  (       90)
00:15:00.310     17.338 -    17.455:   87.9248%  (       46)
00:15:00.310     17.455 -    17.571:   88.3981%  (       39)
00:15:00.310     17.571 -    17.687:   88.7500%  (       29)
00:15:00.310     17.687 -    17.804:   89.1141%  (       30)
00:15:00.310     17.804 -    17.920:   89.3325%  (       18)
00:15:00.310     17.920 -    18.036:   89.5388%  (       17)
00:15:00.310     18.036 -    18.153:   89.7573%  (       18)
00:15:00.310     18.153 -    18.269:   89.9150%  (       13)
00:15:00.310     18.269 -    18.385:   90.0728%  (       13)
00:15:00.310     18.385 -    18.502:   90.1820%  (        9)
00:15:00.310     18.502 -    18.618:   90.2913%  (        9)
00:15:00.310     18.618 -    18.735:   90.4248%  (       11)
00:15:00.310     18.735 -    18.851:   90.5340%  (        9)
00:15:00.310     18.851 -    18.967:   90.6432%  (        9)
00:15:00.310     18.967 -    19.084:   90.7039%  (        5)
00:15:00.310     19.084 -    19.200:   90.7767%  (        6)
00:15:00.310     19.200 -    19.316:   90.8131%  (        3)
00:15:00.310     19.316 -    19.433:   90.8495%  (        3)
00:15:00.310     19.433 -    19.549:   90.8859%  (        3)
00:15:00.310     19.549 -    19.665:   90.9223%  (        3)
00:15:00.310     19.665 -    19.782:   91.0073%  (        7)
00:15:00.310     19.782 -    19.898:   91.0801%  (        6)
00:15:00.311     19.898 -    20.015:   91.1044%  (        2)
00:15:00.311     20.015 -    20.131:   91.1772%  (        6)
00:15:00.311     20.131 -    20.247:   91.2015%  (        2)
00:15:00.311     20.247 -    20.364:   91.2136%  (        1)
00:15:00.311     20.364 -    20.480:   91.2257%  (        1)
00:15:00.311     20.480 -    20.596:   91.2379%  (        1)
00:15:00.311     20.596 -    20.713:   91.3107%  (        6)
00:15:00.311     20.713 -    20.829:   91.3228%  (        1)
00:15:00.311     20.829 -    20.945:   91.3350%  (        1)
00:15:00.311     20.945 -    21.062:   91.3714%  (        3)
00:15:00.311     21.062 -    21.178:   91.4927%  (       10)
00:15:00.311     21.178 -    21.295:   91.5655%  (        6)
00:15:00.311     21.411 -    21.527:   91.6383%  (        6)
00:15:00.311     21.527 -    21.644:   91.6869%  (        4)
00:15:00.311     21.644 -    21.760:   91.7718%  (        7)
00:15:00.311     21.760 -    21.876:   91.7961%  (        2)
00:15:00.311     21.876 -    21.993:   91.8325%  (        3)
00:15:00.311     21.993 -    22.109:   91.9175%  (        7)
00:15:00.311     22.109 -    22.225:   91.9782%  (        5)
00:15:00.311     22.225 -    22.342:   92.0146%  (        3)
00:15:00.311     22.342 -    22.458:   92.0752%  (        5)
00:15:00.311     22.458 -    22.575:   92.0874%  (        1)
00:15:00.311     22.575 -    22.691:   92.1117%  (        2)
00:15:00.311     22.691 -    22.807:   92.1966%  (        7)
00:15:00.311     22.807 -    22.924:   92.2694%  (        6)
00:15:00.311     22.924 -    23.040:   92.3301%  (        5)
00:15:00.311     23.040 -    23.156:   92.3786%  (        4)
00:15:00.311     23.156 -    23.273:   92.4272%  (        4)
00:15:00.311     23.273 -    23.389:   92.4757%  (        4)
00:15:00.311     23.389 -    23.505:   92.5243%  (        4)
00:15:00.311     23.505 -    23.622:   92.5607%  (        3)
00:15:00.311     23.622 -    23.738:   92.5850%  (        2)
00:15:00.311     23.738 -    23.855:   92.6092%  (        2)
00:15:00.311     23.855 -    23.971:   92.6578%  (        4)
00:15:00.311     23.971 -    24.087:   92.6942%  (        3)
00:15:00.311     24.087 -    24.204:   92.7549%  (        5)
00:15:00.311     24.204 -    24.320:   92.7791%  (        2)
00:15:00.311     24.320 -    24.436:   92.8155%  (        3)
00:15:00.311     24.436 -    24.553:   92.8398%  (        2)
00:15:00.311     24.553 -    24.669:   92.8519%  (        1)
00:15:00.311     24.669 -    24.785:   92.9005%  (        4)
00:15:00.311     24.785 -    24.902:   92.9248%  (        2)
00:15:00.311     24.902 -    25.018:   92.9490%  (        2)
00:15:00.311     25.018 -    25.135:   92.9854%  (        3)
00:15:00.311     25.251 -    25.367:   92.9976%  (        1)
00:15:00.311     25.367 -    25.484:   93.0097%  (        1)
00:15:00.311     25.484 -    25.600:   93.0461%  (        3)
00:15:00.311     25.600 -    25.716:   93.0583%  (        1)
00:15:00.311     25.833 -    25.949:   93.0947%  (        3)
00:15:00.311     25.949 -    26.065:   93.1311%  (        3)
00:15:00.311     26.065 -    26.182:   93.1553%  (        2)
00:15:00.311     26.182 -    26.298:   93.1675%  (        1)
00:15:00.311     26.298 -    26.415:   93.1917%  (        2)
00:15:00.311     26.415 -    26.531:   93.2160%  (        2)
00:15:00.311     26.531 -    26.647:   93.2767%  (        5)
00:15:00.311     26.647 -    26.764:   93.4102%  (       11)
00:15:00.311     26.764 -    26.880:   93.6529%  (       20)
00:15:00.311     26.880 -    26.996:   93.9927%  (       28)
00:15:00.311     26.996 -    27.113:   94.3325%  (       28)
00:15:00.311     27.113 -    27.229:   94.6845%  (       29)
00:15:00.311     27.229 -    27.345:   95.2427%  (       46)
00:15:00.311     27.345 -    27.462:   95.4976%  (       21)
00:15:00.311     27.462 -    27.578:   95.9587%  (       38)
00:15:00.311     27.578 -    27.695:   96.3835%  (       35)
00:15:00.311     27.695 -    27.811:   96.7961%  (       34)
00:15:00.311     27.811 -    27.927:   97.2087%  (       34)
00:15:00.311     27.927 -    28.044:   97.7184%  (       42)
00:15:00.311     28.044 -    28.160:   98.2524%  (       44)
00:15:00.311     28.160 -    28.276:   98.6529%  (       33)
00:15:00.311     28.276 -    28.393:   98.8350%  (       15)
00:15:00.311     28.393 -    28.509:   98.9563%  (       10)
00:15:00.311     28.509 -    28.625:   99.0534%  (        8)
00:15:00.311     28.625 -    28.742:   99.1383%  (        7)
00:15:00.311     28.742 -    28.858:   99.1626%  (        2)
00:15:00.311     28.858 -    28.975:   99.2597%  (        8)
00:15:00.311     28.975 -    29.091:   99.3083%  (        4)
00:15:00.311     29.091 -    29.207:   99.3932%  (        7)
00:15:00.311     29.207 -    29.324:   99.4417%  (        4)
00:15:00.311     29.324 -    29.440:   99.4660%  (        2)
00:15:00.311     29.556 -    29.673:   99.5267%  (        5)
00:15:00.311     29.673 -    29.789:   99.5388%  (        1)
00:15:00.311     29.789 -    30.022:   99.5874%  (        4)
00:15:00.311     30.022 -    30.255:   99.5995%  (        1)
00:15:00.311     30.255 -    30.487:   99.6238%  (        2)
00:15:00.311     30.487 -    30.720:   99.6359%  (        1)
00:15:00.311     30.720 -    30.953:   99.6481%  (        1)
00:15:00.311     30.953 -    31.185:   99.6602%  (        1)
00:15:00.311     31.651 -    31.884:   99.6723%  (        1)
00:15:00.311     32.116 -    32.349:   99.6845%  (        1)
00:15:00.311     32.815 -    33.047:   99.6966%  (        1)
00:15:00.311     33.513 -    33.745:   99.7209%  (        2)
00:15:00.311     33.745 -    33.978:   99.7330%  (        1)
00:15:00.311     33.978 -    34.211:   99.7573%  (        2)
00:15:00.311     34.211 -    34.444:   99.7816%  (        2)
00:15:00.311     35.142 -    35.375:   99.8301%  (        4)
00:15:00.311     35.375 -    35.607:   99.8422%  (        1)
00:15:00.311     36.073 -    36.305:   99.8665%  (        2)
00:15:00.311     37.469 -    37.702:   99.8786%  (        1)
00:15:00.311     38.633 -    38.865:   99.8908%  (        1)
00:15:00.311     38.865 -    39.098:   99.9029%  (        1)
00:15:00.311     39.564 -    39.796:   99.9150%  (        1)
00:15:00.311     41.658 -    41.891:   99.9272%  (        1)
00:15:00.311     42.124 -    42.356:   99.9393%  (        1)
00:15:00.311     42.356 -    42.589:   99.9515%  (        1)
00:15:00.311     44.916 -    45.149:   99.9636%  (        1)
00:15:00.311     46.545 -    46.778:   99.9757%  (        1)
00:15:00.311     49.804 -    50.036:  100.0000%  (        2)
00:15:00.311  
00:15:00.311  Complete histogram
00:15:00.311  ==================
00:15:00.311         Range in us     Cumulative     Count
00:15:00.311      7.302 -     7.331:    0.1092%  (        9)
00:15:00.311      7.331 -     7.360:    0.5583%  (       37)
00:15:00.311      7.360 -     7.389:    1.6990%  (       94)
00:15:00.311      7.389 -     7.418:    3.2524%  (      128)
00:15:00.311      7.418 -     7.447:    4.9515%  (      140)
00:15:00.311      7.447 -     7.505:    7.6456%  (      222)
00:15:00.311      7.505 -     7.564:   10.2549%  (      215)
00:15:00.311      7.564 -     7.622:   15.1942%  (      407)
00:15:00.311      7.622 -     7.680:   20.0607%  (      401)
00:15:00.311      7.680 -     7.738:   22.9612%  (      239)
00:15:00.311      7.738 -     7.796:   27.2694%  (      355)
00:15:00.311      7.796 -     7.855:   34.1748%  (      569)
00:15:00.311      7.855 -     7.913:   38.8350%  (      384)
00:15:00.311      7.913 -     7.971:   41.4199%  (      213)
00:15:00.311      7.971 -     8.029:   45.2306%  (      314)
00:15:00.311      8.029 -     8.087:   51.1772%  (      490)
00:15:00.311      8.087 -     8.145:   54.1990%  (      249)
00:15:00.311      8.145 -     8.204:   55.2549%  (       87)
00:15:00.311      8.204 -     8.262:   57.4636%  (      182)
00:15:00.311      8.262 -     8.320:   60.4126%  (      243)
00:15:00.311      8.320 -     8.378:   61.9539%  (      127)
00:15:00.311      8.378 -     8.436:   62.5607%  (       50)
00:15:00.311      8.436 -     8.495:   63.0825%  (       43)
00:15:00.311      8.495 -     8.553:   63.9078%  (       68)
00:15:00.311      8.553 -     8.611:   64.4539%  (       45)
00:15:00.311      8.611 -     8.669:   64.7330%  (       23)
00:15:00.311      8.669 -     8.727:   64.9757%  (       20)
00:15:00.311      8.727 -     8.785:   65.1578%  (       15)
00:15:00.311      8.785 -     8.844:   65.5461%  (       32)
00:15:00.311      8.844 -     8.902:   65.7646%  (       18)
00:15:00.311      8.902 -     8.960:   65.8859%  (       10)
00:15:00.311      8.960 -     9.018:   66.0437%  (       13)
00:15:00.311      9.018 -     9.076:   66.2257%  (       15)
00:15:00.311      9.076 -     9.135:   66.2864%  (        5)
00:15:00.311      9.135 -     9.193:   66.3956%  (        9)
00:15:00.311      9.193 -     9.251:   66.5049%  (        9)
00:15:00.311      9.251 -     9.309:   66.6505%  (       12)
00:15:00.311      9.309 -     9.367:   66.7112%  (        5)
00:15:00.311      9.367 -     9.425:   66.8325%  (       10)
00:15:00.311      9.425 -     9.484:   66.9782%  (       12)
00:15:00.311      9.484 -     9.542:   67.0995%  (       10)
00:15:00.311      9.542 -     9.600:   67.2087%  (        9)
00:15:00.311      9.600 -     9.658:   67.3058%  (        8)
00:15:00.311      9.658 -     9.716:   67.4029%  (        8)
00:15:00.311      9.716 -     9.775:   67.4636%  (        5)
00:15:00.311      9.775 -     9.833:   67.4757%  (        1)
00:15:00.311      9.833 -     9.891:   67.5000%  (        2)
00:15:00.311      9.891 -     9.949:   67.5243%  (        2)
00:15:00.311      9.949 -    10.007:   67.5364%  (        1)
00:15:00.311     10.007 -    10.065:   67.5971%  (        5)
00:15:00.311     10.065 -    10.124:   67.6214%  (        2)
00:15:00.311     10.124 -    10.182:   67.6699%  (        4)
00:15:00.311     10.182 -    10.240:   67.6942%  (        2)
00:15:00.311     10.298 -    10.356:   67.7184%  (        2)
00:15:00.311     10.356 -    10.415:   67.7427%  (        2)
00:15:00.311     10.415 -    10.473:   67.7549%  (        1)
00:15:00.311     10.473 -    10.531:   67.7670%  (        1)
00:15:00.311     10.531 -    10.589:   67.7791%  (        1)
00:15:00.311     10.589 -    10.647:   67.7913%  (        1)
00:15:00.311     10.822 -    10.880:   68.3859%  (       49)
00:15:00.311     10.880 -    10.938:   78.6408%  (      845)
00:15:00.311     10.938 -    10.996:   88.4830%  (      811)
00:15:00.311     10.996 -    11.055:   91.8204%  (      275)
00:15:00.311     11.055 -    11.113:   92.9126%  (       90)
00:15:00.311     11.113 -    11.171:   93.3252%  (       34)
00:15:00.311     11.171 -    11.229:   93.5558%  (       19)
00:15:00.311     11.229 -    11.287:   93.6044%  (        4)
00:15:00.311     11.287 -    11.345:   93.6529%  (        4)
00:15:00.311     11.345 -    11.404:   93.6893%  (        3)
00:15:00.311     11.404 -    11.462:   93.7985%  (        9)
00:15:00.311     11.462 -    11.520:   93.9320%  (       11)
00:15:00.311     11.520 -    11.578:   94.0898%  (       13)
00:15:00.311     11.578 -    11.636:   94.2840%  (       16)
00:15:00.311     11.636 -    11.695:   94.3447%  (        5)
00:15:00.311     11.695 -    11.753:   94.3811%  (        3)
00:15:00.311     11.753 -    11.811:   94.4175%  (        3)
00:15:00.312     11.811 -    11.869:   94.5510%  (       11)
00:15:00.312     11.869 -    11.927:   94.6238%  (        6)
00:15:00.312     11.927 -    11.985:   94.7209%  (        8)
00:15:00.312     11.985 -    12.044:   94.7573%  (        3)
00:15:00.312     12.044 -    12.102:   94.8301%  (        6)
00:15:00.312     12.102 -    12.160:   94.9029%  (        6)
00:15:00.312     12.160 -    12.218:   94.9515%  (        4)
00:15:00.312     12.218 -    12.276:   94.9879%  (        3)
00:15:00.312     12.276 -    12.335:   95.0364%  (        4)
00:15:00.312     12.335 -    12.393:   95.0485%  (        1)
00:15:00.312     12.393 -    12.451:   95.0728%  (        2)
00:15:00.312     12.451 -    12.509:   95.0850%  (        1)
00:15:00.312     12.509 -    12.567:   95.0971%  (        1)
00:15:00.312     12.567 -    12.625:   95.1092%  (        1)
00:15:00.312     12.625 -    12.684:   95.1335%  (        2)
00:15:00.312     12.684 -    12.742:   95.1699%  (        3)
00:15:00.312     12.800 -    12.858:   95.1820%  (        1)
00:15:00.312     12.916 -    12.975:   95.2063%  (        2)
00:15:00.312     12.975 -    13.033:   95.2427%  (        3)
00:15:00.312     13.033 -    13.091:   95.2913%  (        4)
00:15:00.312     13.091 -    13.149:   95.3155%  (        2)
00:15:00.312     13.149 -    13.207:   95.3519%  (        3)
00:15:00.312     13.207 -    13.265:   95.3883%  (        3)
00:15:00.312     13.265 -    13.324:   95.4005%  (        1)
00:15:00.312     13.324 -    13.382:   95.4126%  (        1)
00:15:00.312     13.382 -    13.440:   95.4369%  (        2)
00:15:00.312     13.440 -    13.498:   95.4976%  (        5)
00:15:00.312     13.498 -    13.556:   95.5340%  (        3)
00:15:00.312     13.556 -    13.615:   95.5947%  (        5)
00:15:00.312     13.615 -    13.673:   95.6189%  (        2)
00:15:00.312     13.673 -    13.731:   95.6432%  (        2)
00:15:00.312     13.731 -    13.789:   95.6675%  (        2)
00:15:00.312     13.847 -    13.905:   95.6917%  (        2)
00:15:00.312     13.905 -    13.964:   95.7039%  (        1)
00:15:00.312     13.964 -    14.022:   95.7403%  (        3)
00:15:00.312     14.196 -    14.255:   95.7646%  (        2)
00:15:00.312     14.255 -    14.313:   95.7888%  (        2)
00:15:00.312     14.313 -    14.371:   95.8131%  (        2)
00:15:00.312     14.371 -    14.429:   95.8252%  (        1)
00:15:00.312     14.429 -    14.487:   95.8374%  (        1)
00:15:00.312     14.487 -    14.545:   95.8495%  (        1)
00:15:00.312     14.604 -    14.662:   95.8617%  (        1)
00:15:00.312     14.662 -    14.720:   95.8981%  (        3)
00:15:00.312     14.720 -    14.778:   95.9102%  (        1)
00:15:00.312     14.895 -    15.011:   95.9345%  (        2)
00:15:00.312     15.127 -    15.244:   95.9951%  (        5)
00:15:00.312     15.244 -    15.360:   96.0316%  (        3)
00:15:00.312     15.360 -    15.476:   96.0437%  (        1)
00:15:00.312     15.476 -    15.593:   96.0680%  (        2)
00:15:00.312     15.593 -    15.709:   96.0922%  (        2)
00:15:00.312     15.709 -    15.825:   96.1044%  (        1)
00:15:00.312     15.825 -    15.942:   96.1165%  (        1)
00:15:00.312     15.942 -    16.058:   96.1286%  (        1)
00:15:00.312     16.058 -    16.175:   96.1529%  (        2)
00:15:00.312     16.175 -    16.291:   96.1650%  (        1)
00:15:00.312     16.291 -    16.407:   96.1772%  (        1)
00:15:00.312     16.524 -    16.640:   96.2015%  (        2)
00:15:00.312     16.640 -    16.756:   96.2136%  (        1)
00:15:00.312     16.989 -    17.105:   96.2500%  (        3)
00:15:00.312     17.105 -    17.222:   96.2743%  (        2)
00:15:00.312     17.222 -    17.338:   96.2985%  (        2)
00:15:00.312     17.338 -    17.455:   96.3107%  (        1)
00:15:00.312     17.455 -    17.571:   96.3228%  (        1)
00:15:00.312     17.687 -    17.804:   96.3471%  (        2)
00:15:00.312     18.036 -    18.153:   96.3592%  (        1)
00:15:00.312     18.153 -    18.269:   96.3835%  (        2)
00:15:00.312     18.385 -    18.502:   96.3956%  (        1)
00:15:00.312     18.502 -    18.618:   96.4199%  (        2)
00:15:00.312     18.618 -    18.735:   96.4684%  (        4)
00:15:00.312     18.735 -    18.851:   96.4806%  (        1)
00:15:00.312     18.851 -    18.967:   96.5049%  (        2)
00:15:00.312     19.084 -    19.200:   96.5170%  (        1)
00:15:00.312     19.316 -    19.433:   96.5413%  (        2)
00:15:00.312     19.433 -    19.549:   96.5655%  (        2)
00:15:00.312     19.549 -    19.665:   96.5777%  (        1)
00:15:00.312     19.782 -    19.898:   96.6019%  (        2)
00:15:00.312     20.015 -    20.131:   96.6383%  (        3)
00:15:00.312     20.131 -    20.247:   96.6748%  (        3)
00:15:00.312     20.247 -    20.364:   96.6869%  (        1)
00:15:00.312     20.480 -    20.596:   96.6990%  (        1)
00:15:00.312     20.713 -    20.829:   96.7112%  (        1)
00:15:00.312     20.945 -    21.062:   96.7354%  (        2)
00:15:00.312     21.062 -    21.178:   96.7597%  (        2)
00:15:00.312     21.178 -    21.295:   96.7840%  (        2)
00:15:00.312     21.527 -    21.644:   96.8083%  (        2)
00:15:00.312     21.644 -    21.760:   96.8204%  (        1)
00:15:00.312     21.760 -    21.876:   96.8325%  (        1)
00:15:00.312     21.876 -    21.993:   96.8447%  (        1)
00:15:00.312     21.993 -    22.109:   96.8568%  (        1)
00:15:00.312     22.109 -    22.225:   97.0267%  (       14)
00:15:00.312     22.225 -    22.342:   97.2694%  (       20)
00:15:00.312     22.342 -    22.458:   97.5121%  (       20)
00:15:00.312     22.458 -    22.575:   97.7913%  (       23)
00:15:00.312     22.575 -    22.691:   97.9733%  (       15)
00:15:00.312     22.691 -    22.807:   98.1311%  (       13)
00:15:00.312     22.807 -    22.924:   98.2646%  (       11)
00:15:00.312     22.924 -    23.040:   98.4345%  (       14)
00:15:00.312     23.040 -    23.156:   98.5558%  (       10)
00:15:00.312     23.156 -    23.273:   98.7379%  (       15)
00:15:00.312     23.273 -    23.389:   98.9927%  (       21)
00:15:00.312     23.389 -    23.505:   99.1505%  (       13)
00:15:00.312     23.505 -    23.622:   99.2354%  (        7)
00:15:00.312     23.622 -    23.738:   99.3325%  (        8)
00:15:00.312     23.738 -    23.855:   99.3568%  (        2)
00:15:00.312     23.855 -    23.971:   99.4417%  (        7)
00:15:00.312     23.971 -    24.087:   99.4903%  (        4)
00:15:00.312     24.087 -    24.204:   99.5388%  (        4)
00:15:00.312     24.204 -    24.320:   99.5510%  (        1)
00:15:00.312     24.320 -    24.436:   99.5631%  (        1)
00:15:00.312     24.553 -    24.669:   99.5752%  (        1)
00:15:00.312     24.902 -    25.018:   99.5874%  (        1)
00:15:00.312     25.018 -    25.135:   99.6359%  (        4)
00:15:00.312     25.484 -    25.600:   99.6481%  (        1)
00:15:00.312     25.600 -    25.716:   99.6602%  (        1)
00:15:00.312     25.716 -    25.833:   99.6723%  (        1)
00:15:00.312     25.833 -    25.949:   99.6845%  (        1)
00:15:00.312     26.065 -    26.182:   99.6966%  (        1)
00:15:00.312     26.996 -    27.113:   99.7087%  (        1)
00:15:00.312     27.695 -    27.811:   99.7209%  (        1)
00:15:00.312     27.927 -    28.044:   99.7330%  (        1)
00:15:00.312     28.509 -    28.625:   99.7451%  (        1)
00:15:00.312     29.091 -    29.207:   99.7573%  (        1)
00:15:00.312     29.673 -    29.789:   99.7694%  (        1)
00:15:00.312     29.789 -    30.022:   99.7937%  (        2)
00:15:00.312     30.487 -    30.720:   99.8058%  (        1)
00:15:00.312     30.953 -    31.185:   99.8180%  (        1)
00:15:00.312     31.185 -    31.418:   99.8422%  (        2)
00:15:00.312     31.418 -    31.651:   99.8544%  (        1)
00:15:00.312     32.582 -    32.815:   99.8786%  (        2)
00:15:00.312     33.280 -    33.513:   99.8908%  (        1)
00:15:00.312     42.124 -    42.356:   99.9150%  (        2)
00:15:00.312     46.313 -    46.545:   99.9272%  (        1)
00:15:00.312     53.993 -    54.225:   99.9393%  (        1)
00:15:00.312     56.087 -    56.320:   99.9515%  (        1)
00:15:00.312     56.785 -    57.018:   99.9636%  (        1)
00:15:00.312     57.949 -    58.182:   99.9757%  (        1)
00:15:00.312    105.193 -   105.658:   99.9879%  (        1)
00:15:00.312    139.636 -   140.567:  100.0000%  (        1)
00:15:00.312  
00:15:00.312  
00:15:00.312  real	0m1.292s
00:15:00.312  user	0m1.130s
00:15:00.312  sys	0m0.098s
00:15:00.312   05:05:14 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:00.312   05:05:14 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:15:00.312  ************************************
00:15:00.312  END TEST nvme_overhead
00:15:00.312  ************************************
00:15:00.312   05:05:14 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:15:00.312   05:05:14 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:15:00.312   05:05:14 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:00.312   05:05:14 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:00.312  ************************************
00:15:00.312  START TEST nvme_arbitration
00:15:00.312  ************************************
00:15:00.312   05:05:14 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:15:03.593  Initializing NVMe Controllers
00:15:03.593  Attached to 0000:00:10.0
00:15:03.593  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:15:03.593  Associating QEMU NVMe Ctrl       (12340               ) with lcore 1
00:15:03.593  Associating QEMU NVMe Ctrl       (12340               ) with lcore 2
00:15:03.593  Associating QEMU NVMe Ctrl       (12340               ) with lcore 3
00:15:03.593  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:15:03.593  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:15:03.593  Initialization complete. Launching workers.
00:15:03.593  Starting thread on core 1 with urgent priority queue
00:15:03.593  Starting thread on core 2 with urgent priority queue
00:15:03.593  Starting thread on core 3 with urgent priority queue
00:15:03.593  Starting thread on core 0 with urgent priority queue
00:15:03.593  QEMU NVMe Ctrl       (12340               ) core 0:  7616.00 IO/s    13.13 secs/100000 ios
00:15:03.593  QEMU NVMe Ctrl       (12340               ) core 1:  7553.00 IO/s    13.24 secs/100000 ios
00:15:03.593  QEMU NVMe Ctrl       (12340               ) core 2:  4193.67 IO/s    23.85 secs/100000 ios
00:15:03.593  QEMU NVMe Ctrl       (12340               ) core 3:  4282.00 IO/s    23.35 secs/100000 ios
00:15:03.593  ========================================================
00:15:03.593  
00:15:03.593  
00:15:03.593  real	0m3.340s
00:15:03.593  user	0m9.204s
00:15:03.593  sys	0m0.084s
00:15:03.593   05:05:17 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:03.593   05:05:17 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:15:03.593  ************************************
00:15:03.593  END TEST nvme_arbitration
00:15:03.593  ************************************
00:15:03.593   05:05:17 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:15:03.593   05:05:17 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:15:03.593   05:05:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:03.593   05:05:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:03.593  ************************************
00:15:03.593  START TEST nvme_single_aen
00:15:03.593  ************************************
00:15:03.593   05:05:17 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:15:03.851  Asynchronous Event Request test
00:15:03.851  Attached to 0000:00:10.0
00:15:03.851  Reset controller to setup AER completions for this process
00:15:03.851  Registering asynchronous event callbacks...
00:15:03.851  Getting orig temperature thresholds of all controllers
00:15:03.851  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:03.851  Setting all controllers temperature threshold low to trigger AER
00:15:03.851  Waiting for all controllers temperature threshold to be set lower
00:15:03.851  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:03.851  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:15:03.851  Waiting for all controllers to trigger AER and reset threshold
00:15:03.851  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:15:03.851  Cleaning up...
00:15:03.851  
00:15:03.851  real	0m0.263s
00:15:03.851  user	0m0.090s
00:15:03.851  sys	0m0.087s
00:15:03.851   05:05:17 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:03.851   05:05:17 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:15:03.851  ************************************
00:15:03.851  END TEST nvme_single_aen
00:15:03.851  ************************************
00:15:04.110   05:05:17 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:15:04.110   05:05:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:04.110   05:05:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:04.110   05:05:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:04.110  ************************************
00:15:04.110  START TEST nvme_doorbell_aers
00:15:04.110  ************************************
00:15:04.110   05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:15:04.110   05:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:15:04.110   05:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:15:04.110   05:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:15:04.110    05:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:15:04.110    05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:15:04.110    05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:15:04.110    05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:04.110     05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:15:04.110     05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:15:04.110    05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:15:04.110    05:05:17 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:15:04.110   05:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:15:04.110   05:05:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:15:04.368  [2024-11-20 05:05:18.109128] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135701) is not found. Dropping the request.
00:15:14.398  Executing: test_write_invalid_db
00:15:14.398  Waiting for AER completion...
00:15:14.398  Failure: test_write_invalid_db
00:15:14.398  
00:15:14.398  Executing: test_invalid_db_write_overflow_sq
00:15:14.398  Waiting for AER completion...
00:15:14.398  Failure: test_invalid_db_write_overflow_sq
00:15:14.398  
00:15:14.398  Executing: test_invalid_db_write_overflow_cq
00:15:14.398  Waiting for AER completion...
00:15:14.398  Failure: test_invalid_db_write_overflow_cq
00:15:14.398  
00:15:14.398  
00:15:14.398  real	0m10.094s
00:15:14.398  user	0m8.709s
00:15:14.398  sys	0m1.314s
00:15:14.398   05:05:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:14.398   05:05:27 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
00:15:14.398  ************************************
00:15:14.398  END TEST nvme_doorbell_aers
00:15:14.398  ************************************
00:15:14.398    05:05:27 nvme -- nvme/nvme.sh@97 -- # uname
00:15:14.398   05:05:27 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']'
00:15:14.398   05:05:27 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:15:14.398   05:05:27 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:15:14.398   05:05:27 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:14.398   05:05:27 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:14.398  ************************************
00:15:14.398  START TEST nvme_multi_aen
00:15:14.398  ************************************
00:15:14.398   05:05:27 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0
00:15:14.398  [2024-11-20 05:05:28.230811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135701) is not found. Dropping the request.
00:15:14.398  [2024-11-20 05:05:28.230998] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135701) is not found. Dropping the request.
00:15:14.398  [2024-11-20 05:05:28.231048] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135701) is not found. Dropping the request.
00:15:14.398  Child process pid: 135894
00:15:14.657  [Child] Asynchronous Event Request test
00:15:14.657  [Child] Attached to 0000:00:10.0
00:15:14.657  [Child] Registering asynchronous event callbacks...
00:15:14.657  [Child] Getting orig temperature thresholds of all controllers
00:15:14.657  [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:14.657  [Child] Waiting for all controllers to trigger AER and reset threshold
00:15:14.657  [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:14.657  [Child] 0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:15:14.657  [Child] Cleaning up...
00:15:14.915  Asynchronous Event Request test
00:15:14.915  Attached to 0000:00:10.0
00:15:14.915  Reset controller to setup AER completions for this process
00:15:14.915  Registering asynchronous event callbacks...
00:15:14.915  Getting orig temperature thresholds of all controllers
00:15:14.915  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:15:14.915  Setting all controllers temperature threshold low to trigger AER
00:15:14.915  Waiting for all controllers temperature threshold to be set lower
00:15:14.915  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:15:14.915  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:15:14.915  Waiting for all controllers to trigger AER and reset threshold
00:15:14.915  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:15:14.915  Cleaning up...
00:15:14.915  
00:15:14.915  real	0m0.691s
00:15:14.915  user	0m0.271s
00:15:14.915  sys	0m0.240s
00:15:14.915   05:05:28 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:14.915   05:05:28 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x
00:15:14.915  ************************************
00:15:14.915  END TEST nvme_multi_aen
00:15:14.915  ************************************
00:15:14.915   05:05:28 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:15:14.915   05:05:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:14.915   05:05:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:14.915   05:05:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:14.915  ************************************
00:15:14.915  START TEST nvme_startup
00:15:14.915  ************************************
00:15:14.915   05:05:28 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000
00:15:15.174  Initializing NVMe Controllers
00:15:15.174  Attached to 0000:00:10.0
00:15:15.174  Initialization complete.
00:15:15.174  Time used:207419.719      (us).
00:15:15.174  
00:15:15.174  real	0m0.291s
00:15:15.174  user	0m0.102s
00:15:15.174  sys	0m0.117s
00:15:15.174   05:05:28 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:15.174   05:05:28 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x
00:15:15.174  ************************************
00:15:15.174  END TEST nvme_startup
00:15:15.174  ************************************
00:15:15.174   05:05:29 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary
00:15:15.174   05:05:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:15.174   05:05:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:15.174   05:05:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:15.174  ************************************
00:15:15.174  START TEST nvme_multi_secondary
00:15:15.174  ************************************
00:15:15.174   05:05:29 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary
00:15:15.174   05:05:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=135961
00:15:15.174   05:05:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1
00:15:15.174   05:05:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=135962
00:15:15.174   05:05:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:15:15.174   05:05:29 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4
00:15:18.457  Initializing NVMe Controllers
00:15:18.457  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:18.457  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:15:18.457  Initialization complete. Launching workers.
00:15:18.457  ========================================================
00:15:18.457                                                                             Latency(us)
00:15:18.457  Device Information                     :       IOPS      MiB/s    Average        min        max
00:15:18.457  PCIE (0000:00:10.0) NSID 1 from core  1:   35725.33     139.55     447.57     100.46    1432.67
00:15:18.457  ========================================================
00:15:18.457  Total                                  :   35725.33     139.55     447.57     100.46    1432.67
00:15:18.457  
00:15:18.715  Initializing NVMe Controllers
00:15:18.715  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:18.715  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:15:18.715  Initialization complete. Launching workers.
00:15:18.715  ========================================================
00:15:18.715                                                                             Latency(us)
00:15:18.715  Device Information                     :       IOPS      MiB/s    Average        min        max
00:15:18.715  PCIE (0000:00:10.0) NSID 1 from core  2:   15156.37      59.20    1055.31     125.64   28665.95
00:15:18.715  ========================================================
00:15:18.715  Total                                  :   15156.37      59.20    1055.31     125.64   28665.95
00:15:18.715  
00:15:18.974   05:05:32 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 135961
00:15:20.878  Initializing NVMe Controllers
00:15:20.878  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:20.878  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:15:20.878  Initialization complete. Launching workers.
00:15:20.878  ========================================================
00:15:20.878                                                                             Latency(us)
00:15:20.878  Device Information                     :       IOPS      MiB/s    Average        min        max
00:15:20.878  PCIE (0000:00:10.0) NSID 1 from core  0:   42398.00     165.62     377.06     107.47    7185.22
00:15:20.878  ========================================================
00:15:20.878  Total                                  :   42398.00     165.62     377.06     107.47    7185.22
00:15:20.878  
00:15:20.878   05:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 135962
00:15:20.878   05:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=136042
00:15:20.878   05:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:15:20.878   05:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=136043
00:15:20.878   05:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:15:20.878   05:05:34 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:15:24.163  Initializing NVMe Controllers
00:15:24.164  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:24.164  Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:15:24.164  Initialization complete. Launching workers.
00:15:24.164  ========================================================
00:15:24.164                                                                             Latency(us)
00:15:24.164  Device Information                     :       IOPS      MiB/s    Average        min        max
00:15:24.164  PCIE (0000:00:10.0) NSID 1 from core  1:   36047.00     140.81     443.57     117.24    1421.71
00:15:24.164  ========================================================
00:15:24.164  Total                                  :   36047.00     140.81     443.57     117.24    1421.71
00:15:24.164  
00:15:24.164  Initializing NVMe Controllers
00:15:24.164  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:24.164  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:15:24.164  Initialization complete. Launching workers.
00:15:24.164  ========================================================
00:15:24.164                                                                             Latency(us)
00:15:24.164  Device Information                     :       IOPS      MiB/s    Average        min        max
00:15:24.164  PCIE (0000:00:10.0) NSID 1 from core  0:   36582.00     142.90     437.06     107.04    1153.87
00:15:24.164  ========================================================
00:15:24.164  Total                                  :   36582.00     142.90     437.06     107.04    1153.87
00:15:24.164  
00:15:26.696  Initializing NVMe Controllers
00:15:26.696  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:15:26.696  Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:15:26.696  Initialization complete. Launching workers.
00:15:26.696  ========================================================
00:15:26.696                                                                             Latency(us)
00:15:26.696  Device Information                     :       IOPS      MiB/s    Average        min        max
00:15:26.696  PCIE (0000:00:10.0) NSID 1 from core  2:   17338.00      67.73     922.32     139.97   29176.15
00:15:26.696  ========================================================
00:15:26.697  Total                                  :   17338.00      67.73     922.32     139.97   29176.15
00:15:26.697  
00:15:26.697   05:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 136042
00:15:26.697   05:05:40 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 136043
00:15:26.697  
00:15:26.697  real	0m11.127s
00:15:26.697  user	0m18.670s
00:15:26.697  sys	0m0.788s
00:15:26.697   05:05:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:26.697   05:05:40 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:15:26.697  ************************************
00:15:26.697  END TEST nvme_multi_secondary
00:15:26.697  ************************************
00:15:26.697   05:05:40 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:15:26.697   05:05:40 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/135246 ]]
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@1094 -- # kill 135246
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@1095 -- # wait 135246
00:15:26.697  [2024-11-20 05:05:40.210306] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135893) is not found. Dropping the request.
00:15:26.697  [2024-11-20 05:05:40.210490] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135893) is not found. Dropping the request.
00:15:26.697  [2024-11-20 05:05:40.210546] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135893) is not found. Dropping the request.
00:15:26.697  [2024-11-20 05:05:40.210622] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 135893) is not found. Dropping the request.
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@1101 -- # echo 2
00:15:26.697   05:05:40 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:26.697   05:05:40 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:26.697  ************************************
00:15:26.697  START TEST bdev_nvme_reset_stuck_adm_cmd
00:15:26.697  ************************************
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:15:26.697  * Looking for test storage...
00:15:26.697  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:26.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:26.697  		--rc genhtml_branch_coverage=1
00:15:26.697  		--rc genhtml_function_coverage=1
00:15:26.697  		--rc genhtml_legend=1
00:15:26.697  		--rc geninfo_all_blocks=1
00:15:26.697  		--rc geninfo_unexecuted_blocks=1
00:15:26.697  		
00:15:26.697  		'
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:26.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:26.697  		--rc genhtml_branch_coverage=1
00:15:26.697  		--rc genhtml_function_coverage=1
00:15:26.697  		--rc genhtml_legend=1
00:15:26.697  		--rc geninfo_all_blocks=1
00:15:26.697  		--rc geninfo_unexecuted_blocks=1
00:15:26.697  		
00:15:26.697  		'
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:26.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:26.697  		--rc genhtml_branch_coverage=1
00:15:26.697  		--rc genhtml_function_coverage=1
00:15:26.697  		--rc genhtml_legend=1
00:15:26.697  		--rc geninfo_all_blocks=1
00:15:26.697  		--rc geninfo_unexecuted_blocks=1
00:15:26.697  		
00:15:26.697  		'
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:26.697  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:26.697  		--rc genhtml_branch_coverage=1
00:15:26.697  		--rc genhtml_function_coverage=1
00:15:26.697  		--rc genhtml_legend=1
00:15:26.697  		--rc geninfo_all_blocks=1
00:15:26.697  		--rc geninfo_unexecuted_blocks=1
00:15:26.697  		
00:15:26.697  		'
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:26.697      05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:15:26.697      05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:15:26.697     05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:15:26.697    05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=136208
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:15:26.697   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:15:26.698   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 136208
00:15:26.698   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 136208 ']'
00:15:26.698   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:26.698   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:26.698  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:26.698   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:26.698   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:26.698   05:05:40 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:15:26.698  [2024-11-20 05:05:40.644977] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:15:26.698  [2024-11-20 05:05:40.645861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136208 ]
00:15:26.957  [2024-11-20 05:05:40.839043] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:26.957  [2024-11-20 05:05:40.869369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:15:27.216  [2024-11-20 05:05:40.913298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:27.216  [2024-11-20 05:05:40.913424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:15:27.216  [2024-11-20 05:05:40.913562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:27.216  [2024-11-20 05:05:40.913586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:15:27.785  nvme0n1
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.785    05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_NuvxY.txt
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:15:27.785  true
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:27.785    05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732079141
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=136234
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:15:27.785   05:05:41 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:15:30.319  [2024-11-20 05:05:43.719025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:15:30.319  [2024-11-20 05:05:43.719930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:30.319  [2024-11-20 05:05:43.720295] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:15:30.319  [2024-11-20 05:05:43.720483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:30.319  [2024-11-20 05:05:43.722645] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.319  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 136234
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 136234
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 136234
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_NuvxY.txt
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:15:30.319     05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:15:30.319     05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:15:30.319      05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:15:30.319     05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63
00:15:30.319     05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:15:30.319      05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:15:30.319    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_NuvxY.txt
00:15:30.319   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 136208
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 136208 ']'
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 136208
00:15:30.320    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:30.320    05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136208
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:30.320  killing process with pid 136208
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136208'
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 136208
00:15:30.320   05:05:43 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 136208
00:15:30.579   05:05:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:15:30.579   05:05:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:15:30.579  
00:15:30.579  real	0m4.121s
00:15:30.579  user	0m14.603s
00:15:30.579  sys	0m0.655s
00:15:30.579   05:05:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:30.579   05:05:44 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:15:30.579  ************************************
00:15:30.579  END TEST bdev_nvme_reset_stuck_adm_cmd
00:15:30.579  ************************************
00:15:30.579   05:05:44 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:15:30.579   05:05:44 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:15:30.579   05:05:44 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:30.579   05:05:44 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:30.579   05:05:44 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:30.579  ************************************
00:15:30.579  START TEST nvme_fio
00:15:30.579  ************************************
00:15:30.579   05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:15:30.579   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:15:30.579   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:15:30.579    05:05:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:15:30.579    05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:15:30.579    05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:15:30.579    05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:30.579     05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:15:30.579     05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:15:30.839    05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:15:30.839    05:05:44 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:15:30.839   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0')
00:15:30.839   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:15:30.839   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:15:30.839   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:15:30.839   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:15:30.839   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:15:30.839   05:05:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:15:31.407   05:05:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:15:31.407   05:05:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:31.407    05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:15:31.407    05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:31.407    05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/lib/x86_64-linux-gnu/libasan.so.6
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /lib/x86_64-linux-gnu/libasan.so.6 ]]
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/lib/x86_64-linux-gnu/libasan.so.6 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:15:31.407   05:05:45 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:15:31.407  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:15:31.407  fio-3.35
00:15:31.407  Starting 1 thread
00:15:34.697  
00:15:34.697  test: (groupid=0, jobs=1): err= 0: pid=136372: Wed Nov 20 05:05:48 2024
00:15:34.697    read: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec)
00:15:34.697      slat (usec): min=3, max=102, avg= 5.77, stdev= 3.57
00:15:34.697      clat (usec): min=292, max=10136, avg=4037.49, stdev=281.27
00:15:34.697       lat (usec): min=297, max=10238, avg=4043.26, stdev=281.64
00:15:34.697      clat percentiles (usec):
00:15:34.697       |  1.00th=[ 3654],  5.00th=[ 3752], 10.00th=[ 3818], 20.00th=[ 3884],
00:15:34.697       | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047],
00:15:34.697       | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4424],
00:15:34.697       | 99.00th=[ 4817], 99.50th=[ 4948], 99.90th=[ 7046], 99.95th=[ 8848],
00:15:34.697       | 99.99th=[10028]
00:15:34.697     bw (  KiB/s): min=61045, max=64784, per=100.00%, avg=63092.33, stdev=1894.70, samples=3
00:15:34.697     iops        : min=15261, max=16196, avg=15773.00, stdev=473.81, samples=3
00:15:34.697    write: IOPS=15.8k, BW=61.6MiB/s (64.6MB/s)(123MiB/2001msec); 0 zone resets
00:15:34.697      slat (nsec): min=3940, max=63574, avg=5941.34, stdev=3624.44
00:15:34.697      clat (usec): min=224, max=10056, avg=4054.61, stdev=284.46
00:15:34.697       lat (usec): min=229, max=10069, avg=4060.55, stdev=284.71
00:15:34.697      clat percentiles (usec):
00:15:34.697       |  1.00th=[ 3654],  5.00th=[ 3785], 10.00th=[ 3818], 20.00th=[ 3884],
00:15:34.697       | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4080],
00:15:34.697       | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4424],
00:15:34.697       | 99.00th=[ 4817], 99.50th=[ 5014], 99.90th=[ 7504], 99.95th=[ 8979],
00:15:34.697       | 99.99th=[ 9896]
00:15:34.697     bw (  KiB/s): min=61381, max=64128, per=99.51%, avg=62793.67, stdev=1375.17, samples=3
00:15:34.697     iops        : min=15345, max=16032, avg=15698.33, stdev=343.92, samples=3
00:15:34.697    lat (usec)   : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.02%
00:15:34.697    lat (msec)   : 2=0.05%, 4=44.80%, 10=55.10%, 20=0.01%
00:15:34.697    cpu          : usr=99.80%, sys=0.15%, ctx=6, majf=0, minf=39
00:15:34.697    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:15:34.697       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:34.697       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:15:34.697       issued rwts: total=31544,31567,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:34.697       latency   : target=0, window=0, percentile=100.00%, depth=128
00:15:34.697  
00:15:34.697  Run status group 0 (all jobs):
00:15:34.697     READ: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec
00:15:34.697    WRITE: bw=61.6MiB/s (64.6MB/s), 61.6MiB/s-61.6MiB/s (64.6MB/s-64.6MB/s), io=123MiB (129MB), run=2001-2001msec
00:15:34.697  -----------------------------------------------------
00:15:34.697  Suppressions used:
00:15:34.697    count      bytes template
00:15:34.697        1         32 /usr/src/fio/parse.c
00:15:34.697  -----------------------------------------------------
00:15:34.697  
00:15:34.697   05:05:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:15:34.697   05:05:48 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:15:34.697  
00:15:34.697  real	0m4.050s
00:15:34.697  user	0m3.335s
00:15:34.697  sys	0m0.387s
00:15:34.697   05:05:48 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:34.697   05:05:48 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:15:34.697  ************************************
00:15:34.697  END TEST nvme_fio
00:15:34.697  ************************************
00:15:34.697  
00:15:34.697  real	0m45.200s
00:15:34.697  user	2m0.687s
00:15:34.697  sys	0m7.778s
00:15:34.697   05:05:48 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:34.697   05:05:48 nvme -- common/autotest_common.sh@10 -- # set +x
00:15:34.697  ************************************
00:15:34.697  END TEST nvme
00:15:34.697  ************************************
00:15:34.697   05:05:48  -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:15:34.697   05:05:48  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:15:34.697   05:05:48  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:34.697   05:05:48  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:34.697   05:05:48  -- common/autotest_common.sh@10 -- # set +x
00:15:34.697  ************************************
00:15:34.697  START TEST nvme_scc
00:15:34.697  ************************************
00:15:34.697   05:05:48 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:15:34.957  * Looking for test storage...
00:15:34.957  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:15:34.957     05:05:48 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:34.957      05:05:48 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version
00:15:34.957      05:05:48 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:34.957     05:05:48 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@345 -- # : 1
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:34.957     05:05:48 nvme_scc -- scripts/common.sh@368 -- # return 0
00:15:34.957     05:05:48 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:34.957     05:05:48 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:34.957  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.957  		--rc genhtml_branch_coverage=1
00:15:34.957  		--rc genhtml_function_coverage=1
00:15:34.957  		--rc genhtml_legend=1
00:15:34.957  		--rc geninfo_all_blocks=1
00:15:34.957  		--rc geninfo_unexecuted_blocks=1
00:15:34.957  		
00:15:34.957  		'
00:15:34.957     05:05:48 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:34.957  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.957  		--rc genhtml_branch_coverage=1
00:15:34.957  		--rc genhtml_function_coverage=1
00:15:34.957  		--rc genhtml_legend=1
00:15:34.957  		--rc geninfo_all_blocks=1
00:15:34.957  		--rc geninfo_unexecuted_blocks=1
00:15:34.957  		
00:15:34.957  		'
00:15:34.957     05:05:48 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:34.957  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.957  		--rc genhtml_branch_coverage=1
00:15:34.957  		--rc genhtml_function_coverage=1
00:15:34.957  		--rc genhtml_legend=1
00:15:34.957  		--rc geninfo_all_blocks=1
00:15:34.957  		--rc geninfo_unexecuted_blocks=1
00:15:34.957  		
00:15:34.957  		'
00:15:34.957     05:05:48 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:34.957  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:34.957  		--rc genhtml_branch_coverage=1
00:15:34.957  		--rc genhtml_function_coverage=1
00:15:34.957  		--rc genhtml_legend=1
00:15:34.957  		--rc geninfo_all_blocks=1
00:15:34.957  		--rc geninfo_unexecuted_blocks=1
00:15:34.957  		
00:15:34.957  		'
00:15:34.957    05:05:48 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:15:34.957       05:05:48 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:15:34.957      05:05:48 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:15:34.957      05:05:48 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:15:34.957       05:05:48 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:34.957       05:05:48 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:34.957       05:05:48 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:34.957       05:05:48 nvme_scc -- paths/export.sh@5 -- # export PATH
00:15:34.957       05:05:48 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/protoc/21.7/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:15:34.957     05:05:48 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:15:34.957    05:05:48 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:34.957    05:05:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:15:34.957   05:05:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:15:34.957   05:05:48 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:15:34.957   05:05:48 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:15:35.216  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:15:35.476  Waiting for block devices as requested
00:15:35.476  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:15:35.476   05:05:49 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0
00:15:35.476   05:05:49 nvme_scc -- scripts/common.sh@18 -- # local i
00:15:35.476   05:05:49 nvme_scc -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:15:35.476   05:05:49 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:15:35.476   05:05:49 nvme_scc -- scripts/common.sh@27 -- # return 0
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.476    05:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1b36 ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:15:35.476    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1af4 ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"'
00:15:35.476    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  12340                ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12340               "'
00:15:35.476    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12340               '
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  QEMU NVMe Ctrl                           ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl                          "'
00:15:35.476    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl                          '
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  8.0.0    ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0   "'
00:15:35.476    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0   '
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  6 ]]
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"'
00:15:35.476    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6
00:15:35.476   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  525400 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x10400 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x100 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x8000 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000-0000-0000-0000-000000000000 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x12a ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  3 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  343 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  373 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"'
00:15:35.477    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.477   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x66 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x44 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  256 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x15d ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fna]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x7 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"'
00:15:35.478    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0
00:15:35.478   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  nqn.2019-08.org.qemu:12340 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12340"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12340
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-'
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=-
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@18 -- # shift
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()'
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x140000 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x14 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  7 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x4 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x3 ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0x1f ]]
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"'
00:15:35.479    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.479   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  1 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  128 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  127 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  00000000000000000000000000000000 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  0000000000000000 ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:9  rp:0  ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0   lbads:9  rp:0 "'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0   lbads:9  rp:0 '
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:9  rp:0  ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8   lbads:9  rp:0 "'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8   lbads:9  rp:0 '
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:9  rp:0  ]]
00:15:35.480   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16  lbads:9  rp:0 "'
00:15:35.480    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16  lbads:9  rp:0 '
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:9  rp:0  ]]
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64  lbads:9  rp:0 "'
00:15:35.481    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64  lbads:9  rp:0 '
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:0   lbads:12 rp:0 (in use) ]]
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0   lbads:12 rp:0 (in use)"'
00:15:35.481    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0   lbads:12 rp:0 (in use)'
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:8   lbads:12 rp:0  ]]
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8   lbads:12 rp:0 "'
00:15:35.481    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8   lbads:12 rp:0 '
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:16  lbads:12 rp:0  ]]
00:15:35.481   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16  lbads:12 rp:0 "'
00:15:35.739    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16  lbads:12 rp:0 '
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@22 -- # [[ -n  ms:64  lbads:12 rp:0  ]]
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64  lbads:12 rp:0 "'
00:15:35.739    05:05:49 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64  lbads:12 rp:0 '
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0
00:15:35.739   05:05:49 nvme_scc -- nvme/functions.sh@65 -- # (( 1 > 0 ))
00:15:35.739    05:05:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:15:35.739    05:05:49 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:15:35.739    05:05:49 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:15:35.739     05:05:49 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:15:35.739     05:05:49 nvme_scc -- nvme/functions.sh@192 -- # (( 1 == 0 ))
00:15:35.739     05:05:49 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:15:35.739     05:05:49 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:15:35.739     05:05:49 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:15:35.739     05:05:49 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:15:35.739     05:05:49 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:15:35.739      05:05:49 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:15:35.740      05:05:49 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:15:35.740     05:05:49 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:15:35.740     05:05:49 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:15:35.740     05:05:49 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:15:35.740    05:05:49 nvme_scc -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:15:35.740    05:05:49 nvme_scc -- nvme/functions.sh@208 -- # echo nvme0
00:15:35.740    05:05:49 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:15:35.740   05:05:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme0
00:15:35.740   05:05:49 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:15:35.740   05:05:49 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:35.999  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:15:35.999  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:15:37.376   05:05:51 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:15:37.376   05:05:51 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:15:37.376   05:05:51 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:37.376   05:05:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:15:37.376  ************************************
00:15:37.376  START TEST nvme_simple_copy
00:15:37.376  ************************************
00:15:37.376   05:05:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:15:37.635  Initializing NVMe Controllers
00:15:37.635  Attaching to 0000:00:10.0
00:15:37.635  Controller supports SCC. Attached to 0000:00:10.0
00:15:37.635    Namespace ID: 1 size: 5GB
00:15:37.635  Initialization complete.
00:15:37.635  
00:15:37.635  Controller QEMU NVMe Ctrl       (12340               )
00:15:37.635  Controller PCI vendor:6966 PCI subsystem vendor:6900
00:15:37.635  Namespace Block Size:4096
00:15:37.635  Writing LBAs 0 to 63 with Random Data
00:15:37.635  Copied LBAs from 0 - 63 to the Destination LBA 256
00:15:37.635  LBAs matching Written Data: 64
00:15:37.635  
00:15:37.635  real	0m0.321s
00:15:37.635  user	0m0.144s
00:15:37.635  sys	0m0.078s
00:15:37.635   05:05:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:37.635   05:05:51 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:15:37.635  ************************************
00:15:37.635  END TEST nvme_simple_copy
00:15:37.635  ************************************
00:15:37.635  
00:15:37.635  real	0m2.763s
00:15:37.635  user	0m0.938s
00:15:37.635  sys	0m1.739s
00:15:37.635   05:05:51 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:37.635   05:05:51 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:15:37.635  ************************************
00:15:37.635  END TEST nvme_scc
00:15:37.635  ************************************
00:15:37.635   05:05:51  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:15:37.635   05:05:51  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:15:37.635   05:05:51  -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:15:37.635   05:05:51  -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]]
00:15:37.635   05:05:51  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:15:37.635   05:05:51  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:15:37.635   05:05:51  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:37.635   05:05:51  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:37.635   05:05:51  -- common/autotest_common.sh@10 -- # set +x
00:15:37.635  ************************************
00:15:37.635  START TEST nvme_rpc
00:15:37.635  ************************************
00:15:37.635   05:05:51 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:15:37.635  * Looking for test storage...
00:15:37.635  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:15:37.635    05:05:51 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:37.635     05:05:51 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:15:37.635     05:05:51 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:37.895     05:05:51 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:37.895    05:05:51 nvme_rpc -- scripts/common.sh@368 -- # return 0
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:37.895  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.895  		--rc genhtml_branch_coverage=1
00:15:37.895  		--rc genhtml_function_coverage=1
00:15:37.895  		--rc genhtml_legend=1
00:15:37.895  		--rc geninfo_all_blocks=1
00:15:37.895  		--rc geninfo_unexecuted_blocks=1
00:15:37.895  		
00:15:37.895  		'
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:37.895  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.895  		--rc genhtml_branch_coverage=1
00:15:37.895  		--rc genhtml_function_coverage=1
00:15:37.895  		--rc genhtml_legend=1
00:15:37.895  		--rc geninfo_all_blocks=1
00:15:37.895  		--rc geninfo_unexecuted_blocks=1
00:15:37.895  		
00:15:37.895  		'
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:37.895  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.895  		--rc genhtml_branch_coverage=1
00:15:37.895  		--rc genhtml_function_coverage=1
00:15:37.895  		--rc genhtml_legend=1
00:15:37.895  		--rc geninfo_all_blocks=1
00:15:37.895  		--rc geninfo_unexecuted_blocks=1
00:15:37.895  		
00:15:37.895  		'
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:37.895  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:37.895  		--rc genhtml_branch_coverage=1
00:15:37.895  		--rc genhtml_function_coverage=1
00:15:37.895  		--rc genhtml_legend=1
00:15:37.895  		--rc geninfo_all_blocks=1
00:15:37.895  		--rc geninfo_unexecuted_blocks=1
00:15:37.895  		
00:15:37.895  		'
00:15:37.895   05:05:51 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:37.895    05:05:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:15:37.895     05:05:51 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:15:37.895     05:05:51 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:15:37.895     05:05:51 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:15:37.895     05:05:51 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:15:37.895      05:05:51 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:15:37.895      05:05:51 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:15:37.895     05:05:51 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:15:37.895     05:05:51 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:15:37.895    05:05:51 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:15:37.895   05:05:51 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
00:15:37.895   05:05:51 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:15:37.895   05:05:51 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=136877
00:15:37.895   05:05:51 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:15:37.895   05:05:51 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 136877
00:15:37.895   05:05:51 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 136877 ']'
00:15:37.895   05:05:51 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:37.895   05:05:51 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:37.895   05:05:51 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:37.895  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:37.895   05:05:51 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:37.895   05:05:51 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:37.895  [2024-11-20 05:05:51.789248] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:15:37.895  [2024-11-20 05:05:51.789558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136877 ]
00:15:38.154  [2024-11-20 05:05:51.956922] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:38.154  [2024-11-20 05:05:51.986746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:38.154  [2024-11-20 05:05:52.038749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:38.154  [2024-11-20 05:05:52.038769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:39.090   05:05:52 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:39.090   05:05:52 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:15:39.090   05:05:52 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:15:39.349  Nvme0n1
00:15:39.349   05:05:53 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:15:39.349   05:05:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:15:39.607  request:
00:15:39.608  {
00:15:39.608    "bdev_name": "Nvme0n1",
00:15:39.608    "filename": "non_existing_file",
00:15:39.608    "method": "bdev_nvme_apply_firmware",
00:15:39.608    "req_id": 1
00:15:39.608  }
00:15:39.608  Got JSON-RPC error response
00:15:39.608  response:
00:15:39.608  {
00:15:39.608    "code": -32603,
00:15:39.608    "message": "open file failed."
00:15:39.608  }
00:15:39.608   05:05:53 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:15:39.608   05:05:53 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:15:39.608   05:05:53 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:15:39.867   05:05:53 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:15:39.867   05:05:53 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 136877
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 136877 ']'
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 136877
00:15:39.867    05:05:53 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:39.867    05:05:53 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136877
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:39.867  killing process with pid 136877
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136877'
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@973 -- # kill 136877
00:15:39.867   05:05:53 nvme_rpc -- common/autotest_common.sh@978 -- # wait 136877
00:15:40.435  ************************************
00:15:40.435  END TEST nvme_rpc
00:15:40.435  ************************************
00:15:40.435  
00:15:40.435  real	0m2.717s
00:15:40.435  user	0m5.304s
00:15:40.435  sys	0m0.678s
00:15:40.435   05:05:54 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:40.435   05:05:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:15:40.435   05:05:54  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:15:40.435   05:05:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:40.435   05:05:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:40.435   05:05:54  -- common/autotest_common.sh@10 -- # set +x
00:15:40.435  ************************************
00:15:40.435  START TEST nvme_rpc_timeouts
00:15:40.435  ************************************
00:15:40.435   05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:15:40.435  * Looking for test storage...
00:15:40.435  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:15:40.435    05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:40.435     05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version
00:15:40.435     05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:40.694    05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:40.694     05:05:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:40.694    05:05:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:15:40.694    05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:40.694    05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:40.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:40.694  		--rc genhtml_branch_coverage=1
00:15:40.694  		--rc genhtml_function_coverage=1
00:15:40.694  		--rc genhtml_legend=1
00:15:40.694  		--rc geninfo_all_blocks=1
00:15:40.694  		--rc geninfo_unexecuted_blocks=1
00:15:40.694  		
00:15:40.694  		'
00:15:40.694    05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:40.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:40.694  		--rc genhtml_branch_coverage=1
00:15:40.694  		--rc genhtml_function_coverage=1
00:15:40.694  		--rc genhtml_legend=1
00:15:40.694  		--rc geninfo_all_blocks=1
00:15:40.694  		--rc geninfo_unexecuted_blocks=1
00:15:40.694  		
00:15:40.694  		'
00:15:40.694    05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:40.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:40.694  		--rc genhtml_branch_coverage=1
00:15:40.694  		--rc genhtml_function_coverage=1
00:15:40.694  		--rc genhtml_legend=1
00:15:40.694  		--rc geninfo_all_blocks=1
00:15:40.694  		--rc geninfo_unexecuted_blocks=1
00:15:40.694  		
00:15:40.694  		'
00:15:40.694    05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:40.694  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:40.694  		--rc genhtml_branch_coverage=1
00:15:40.694  		--rc genhtml_function_coverage=1
00:15:40.694  		--rc genhtml_legend=1
00:15:40.694  		--rc geninfo_all_blocks=1
00:15:40.694  		--rc geninfo_unexecuted_blocks=1
00:15:40.694  		
00:15:40.694  		'
00:15:40.694   05:05:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:15:40.694   05:05:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_136948
00:15:40.694   05:05:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_136948
00:15:40.694   05:05:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=136991
00:15:40.694   05:05:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:15:40.695   05:05:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:15:40.695   05:05:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 136991
00:15:40.695   05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 136991 ']'
00:15:40.695   05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:15:40.695   05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:15:40.695   05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:15:40.695  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:15:40.695   05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:15:40.695   05:05:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:15:40.695  [2024-11-20 05:05:54.475410] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:15:40.695  [2024-11-20 05:05:54.475669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid136991 ]
00:15:40.695  [2024-11-20 05:05:54.620885] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:15:40.695  [2024-11-20 05:05:54.639491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:40.953  [2024-11-20 05:05:54.681066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:15:40.953  [2024-11-20 05:05:54.681079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:41.519   05:05:55 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:15:41.519   05:05:55 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:15:41.519  Checking default timeout settings:
00:15:41.519   05:05:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:15:41.519   05:05:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:15:42.083  Making settings changes with rpc:
00:15:42.083   05:05:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:15:42.083   05:05:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:15:42.083  Check default vs. modified settings:
00:15:42.084   05:05:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:15:42.084   05:05:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_136948
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_136948
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:15:42.651  Setting action_on_timeout is changed as expected.
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_136948
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_136948
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:15:42.651  Setting timeout_us is changed as expected.
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_136948
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_136948
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:15:42.651    05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:15:42.651  Setting timeout_admin_us is changed as expected.
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_136948 /tmp/settings_modified_136948
00:15:42.651   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 136991
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 136991 ']'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 136991
00:15:42.651    05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:15:42.651    05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 136991
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:15:42.651  killing process with pid 136991
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 136991'
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 136991
00:15:42.651   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 136991
00:15:43.219  RPC TIMEOUT SETTING TEST PASSED.
00:15:43.219   05:05:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:15:43.219  ************************************
00:15:43.219  END TEST nvme_rpc_timeouts
00:15:43.219  ************************************
00:15:43.219  
00:15:43.219  real	0m2.747s
00:15:43.219  user	0m5.477s
00:15:43.219  sys	0m0.650s
00:15:43.219   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:43.219   05:05:56 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:15:43.219    05:05:57  -- spdk/autotest.sh@239 -- # uname -s
00:15:43.219   05:05:57  -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']'
00:15:43.219   05:05:57  -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:15:43.219   05:05:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:15:43.219   05:05:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:43.219   05:05:57  -- common/autotest_common.sh@10 -- # set +x
00:15:43.219  ************************************
00:15:43.219  START TEST sw_hotplug
00:15:43.219  ************************************
00:15:43.219   05:05:57 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh
00:15:43.219  * Looking for test storage...
00:15:43.219  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:15:43.219    05:05:57 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:15:43.219     05:05:57 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version
00:15:43.219     05:05:57 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:15:43.478    05:05:57 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-:
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-:
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<'
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@345 -- # : 1
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 ))
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:15:43.478     05:05:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:15:43.478    05:05:57 sw_hotplug -- scripts/common.sh@368 -- # return 0
00:15:43.478    05:05:57 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:15:43.478    05:05:57 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:15:43.478  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:43.478  		--rc genhtml_branch_coverage=1
00:15:43.478  		--rc genhtml_function_coverage=1
00:15:43.478  		--rc genhtml_legend=1
00:15:43.478  		--rc geninfo_all_blocks=1
00:15:43.478  		--rc geninfo_unexecuted_blocks=1
00:15:43.478  		
00:15:43.478  		'
00:15:43.478    05:05:57 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:15:43.478  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:43.478  		--rc genhtml_branch_coverage=1
00:15:43.478  		--rc genhtml_function_coverage=1
00:15:43.478  		--rc genhtml_legend=1
00:15:43.478  		--rc geninfo_all_blocks=1
00:15:43.478  		--rc geninfo_unexecuted_blocks=1
00:15:43.478  		
00:15:43.478  		'
00:15:43.479    05:05:57 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:15:43.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:43.479  		--rc genhtml_branch_coverage=1
00:15:43.479  		--rc genhtml_function_coverage=1
00:15:43.479  		--rc genhtml_legend=1
00:15:43.479  		--rc geninfo_all_blocks=1
00:15:43.479  		--rc geninfo_unexecuted_blocks=1
00:15:43.479  		
00:15:43.479  		'
00:15:43.479    05:05:57 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:15:43.479  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:15:43.479  		--rc genhtml_branch_coverage=1
00:15:43.479  		--rc genhtml_function_coverage=1
00:15:43.479  		--rc genhtml_legend=1
00:15:43.479  		--rc geninfo_all_blocks=1
00:15:43.479  		--rc geninfo_unexecuted_blocks=1
00:15:43.479  		
00:15:43.479  		'
00:15:43.479   05:05:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:43.737  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:15:43.737  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:15:44.672   05:05:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6
00:15:44.672   05:05:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3
00:15:44.672   05:05:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace))
00:15:44.672    05:05:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@313 -- # local nvmes
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]]
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02))
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@298 -- # local bdf=
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@233 -- # local class
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@234 -- # local subclass
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@235 -- # local progif
00:15:44.672       05:05:58 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@236 -- # class=01
00:15:44.672       05:05:58 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@237 -- # subclass=08
00:15:44.672       05:05:58 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@238 -- # progif=02
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@240 -- # hash lspci
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']'
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}'
00:15:44.672      05:05:58 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"'
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@")
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@18 -- # local i
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@21 -- # [[    =~  0000:00:10.0  ]]
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]]
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@27 -- # return 0
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}"
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]]
00:15:44.672     05:05:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]]
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf")
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@328 -- # (( 1 ))
00:15:44.672    05:05:58 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0
00:15:44.672   05:05:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=1
00:15:44.672   05:05:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}")
00:15:44.672   05:05:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:15:45.241  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:15:45.241  Waiting for block devices as requested
00:15:45.241  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:15:45.241   05:05:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED=0000:00:10.0
00:15:45.241   05:05:59 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:15:45.500  0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0
00:15:45.500  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:15:45.765  0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:15:46.771   05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable
00:15:46.771   05:06:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:15:46.771   05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug
00:15:46.771   05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT
00:15:46.771   05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=137559
00:15:46.771   05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 3 -r 3 -l warning
00:15:46.771   05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false
00:15:46.771   05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:15:46.772    05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false
00:15:46.772    05:06:00 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:15:46.772    05:06:00 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:15:46.772    05:06:00 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:15:46.772    05:06:00 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:15:46.772     05:06:00 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false
00:15:46.772     05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:15:46.772     05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:15:46.772     05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false
00:15:46.772     05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:15:46.772     05:06:00 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:15:47.030  Initializing NVMe Controllers
00:15:47.030  Attaching to 0000:00:10.0
00:15:47.030  Attached to 0000:00:10.0
00:15:47.030  Initialization complete. Starting I/O...
00:15:47.030  QEMU NVMe Ctrl       (12340               ):          0 I/Os completed (+0)
00:15:47.030  
00:15:48.408  QEMU NVMe Ctrl       (12340               ):       2445 I/Os completed (+2445)
00:15:48.408  
00:15:49.358  QEMU NVMe Ctrl       (12340               ):       5729 I/Os completed (+3284)
00:15:49.358  
00:15:50.298  QEMU NVMe Ctrl       (12340               ):       9202 I/Os completed (+3473)
00:15:50.299  
00:15:51.236  QEMU NVMe Ctrl       (12340               ):      12746 I/Os completed (+3544)
00:15:51.236  
00:15:52.174  QEMU NVMe Ctrl       (12340               ):      16286 I/Os completed (+3540)
00:15:52.174  
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:53.112  [2024-11-20 05:06:06.728601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:53.112  Controller removed: QEMU NVMe Ctrl       (12340               )
00:15:53.112  [2024-11-20 05:06:06.729941] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112  [2024-11-20 05:06:06.730055] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112  [2024-11-20 05:06:06.730080] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112  [2024-11-20 05:06:06.730100] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:15:53.112  [2024-11-20 05:06:06.731454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112  [2024-11-20 05:06:06.731519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112  [2024-11-20 05:06:06.731540] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112  [2024-11-20 05:06:06.731558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:53.112     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:53.113     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:53.113     05:06:06 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:53.113  Attaching to 0000:00:10.0
00:15:53.113  Attached to 0000:00:10.0
00:15:53.113  QEMU NVMe Ctrl       (12340               ):        261 I/Os completed (+261)
00:15:53.113  
00:15:54.050  QEMU NVMe Ctrl       (12340               ):       3769 I/Os completed (+3508)
00:15:54.050  
00:15:55.445  QEMU NVMe Ctrl       (12340               ):       7317 I/Os completed (+3548)
00:15:55.445  
00:15:56.019  QEMU NVMe Ctrl       (12340               ):      10873 I/Os completed (+3556)
00:15:56.019  
00:15:57.397  QEMU NVMe Ctrl       (12340               ):      14425 I/Os completed (+3552)
00:15:57.397  
00:15:58.335  QEMU NVMe Ctrl       (12340               ):      17937 I/Os completed (+3512)
00:15:58.335  
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:15:59.273  [2024-11-20 05:06:12.876420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:15:59.273  Controller removed: QEMU NVMe Ctrl       (12340               )
00:15:59.273  [2024-11-20 05:06:12.877591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273  [2024-11-20 05:06:12.877643] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273  [2024-11-20 05:06:12.877665] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273  [2024-11-20 05:06:12.877685] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:15:59.273  [2024-11-20 05:06:12.879011] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273  [2024-11-20 05:06:12.879062] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273  [2024-11-20 05:06:12.879081] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273  [2024-11-20 05:06:12.879097] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:15:59.273     05:06:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:15:59.273  
00:15:59.273     05:06:13 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:15:59.273     05:06:13 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:15:59.273     05:06:13 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:15:59.273  Attaching to 0000:00:10.0
00:15:59.273  Attached to 0000:00:10.0
00:16:00.210  QEMU NVMe Ctrl       (12340               ):       3098 I/Os completed (+3098)
00:16:00.210  
00:16:01.147  QEMU NVMe Ctrl       (12340               ):       6586 I/Os completed (+3488)
00:16:01.147  
00:16:02.084  QEMU NVMe Ctrl       (12340               ):      10102 I/Os completed (+3516)
00:16:02.084  
00:16:03.021  QEMU NVMe Ctrl       (12340               ):      13626 I/Os completed (+3524)
00:16:03.021  
00:16:04.399  QEMU NVMe Ctrl       (12340               ):      17094 I/Os completed (+3468)
00:16:04.399  
00:16:05.336  QEMU NVMe Ctrl       (12340               ):      20606 I/Os completed (+3512)
00:16:05.336  
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:05.336  [2024-11-20 05:06:19.070614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:05.336  Controller removed: QEMU NVMe Ctrl       (12340               )
00:16:05.336  [2024-11-20 05:06:19.071715] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  [2024-11-20 05:06:19.071762] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  [2024-11-20 05:06:19.071782] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  [2024-11-20 05:06:19.071801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:16:05.336  [2024-11-20 05:06:19.073054] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  [2024-11-20 05:06:19.073093] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  [2024-11-20 05:06:19.073112] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  [2024-11-20 05:06:19.073129] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:05.336  EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor
00:16:05.336  EAL: Scan for (pci) bus failed.
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:05.336     05:06:19 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:16:05.336  Attaching to 0000:00:10.0
00:16:05.336  Attached to 0000:00:10.0
00:16:05.336  unregister_dev: QEMU NVMe Ctrl       (12340               )
00:16:05.336  [2024-11-20 05:06:19.251705] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09
00:16:11.899     05:06:25 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false
00:16:11.899     05:06:25 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:11.899    05:06:25 sw_hotplug -- common/autotest_common.sh@719 -- # time=24.53
00:16:11.899    05:06:25 sw_hotplug -- common/autotest_common.sh@720 -- # echo 24.53
00:16:11.899    05:06:25 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:16:11.899   05:06:25 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=24.53
00:16:11.899   05:06:25 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 24.53 1
00:16:11.899  remove_attach_helper took 24.53s to complete (handling 1 nvme drive(s)) 05:06:25 sw_hotplug -- nvme/sw_hotplug.sh@91 -- # sleep 6
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 137559
00:16:18.461  /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (137559) - No such process
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 137559
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=137931
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT
00:16:18.461   05:06:31 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 137931
00:16:18.461   05:06:31 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 137931 ']'
00:16:18.461   05:06:31 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:18.461   05:06:31 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:18.461  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:18.461   05:06:31 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:18.461   05:06:31 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:18.461   05:06:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:18.461  [2024-11-20 05:06:31.344875] Starting SPDK v25.01-pre git sha1 f22e807f1 / DPDK 24.11.0-rc3 initialization...
00:16:18.461  [2024-11-20 05:06:31.345166] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid137931 ]
00:16:18.461  [2024-11-20 05:06:31.496446] pci_dpdk.c:  37:dpdk_pci_init: *NOTICE*: In-development DPDK 24.11.0-rc3 is used. There is no support for it in SPDK. Enabled only for validation.
00:16:18.461  [2024-11-20 05:06:31.529963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:18.461  [2024-11-20 05:06:31.577133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:18.461   05:06:32 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:18.461   05:06:32 sw_hotplug -- common/autotest_common.sh@868 -- # return 0
00:16:18.461   05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:16:18.461   05:06:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:18.461   05:06:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:18.461   05:06:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:18.461   05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true
00:16:18.461   05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:16:18.461    05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:16:18.461    05:06:32 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:16:18.461    05:06:32 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:16:18.461    05:06:32 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:16:18.461    05:06:32 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:16:18.461     05:06:32 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:16:18.461     05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:16:18.461     05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:16:18.461     05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:16:18.461     05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:16:18.461     05:06:32 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:16:25.038     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:25.038     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:25.038     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:25.038     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:25.038     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:25.038      05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:25.038      05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:25.039      05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:25.039       05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:25.039       05:06:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.039       05:06:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:25.039       05:06:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:25.039  [2024-11-20 05:06:38.326666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:25.039  [2024-11-20 05:06:38.328189] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:25.039  [2024-11-20 05:06:38.328250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:25.039  [2024-11-20 05:06:38.328312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:25.039  [2024-11-20 05:06:38.328354] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:25.039  [2024-11-20 05:06:38.328379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:25.039  [2024-11-20 05:06:38.328404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:25.039  [2024-11-20 05:06:38.328423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:25.039  [2024-11-20 05:06:38.328448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:25.039  [2024-11-20 05:06:38.328487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:25.039  [2024-11-20 05:06:38.328507] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:25.039  [2024-11-20 05:06:38.328532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:25.039  [2024-11-20 05:06:38.328552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:25.039      05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:25.039      05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:25.039      05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:25.039       05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:25.039       05:06:38 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:25.039       05:06:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:25.039       05:06:38 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:25.039     05:06:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:25.298     05:06:39 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:25.298     05:06:39 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:16:31.900      05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:16:31.900      05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:31.900      05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:31.900       05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:31.900       05:06:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.900       05:06:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:31.900       05:06:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:31.900     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:31.900      05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:31.900      05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:31.900      05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:31.901       05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:31.901       05:06:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:31.901       05:06:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:31.901  [2024-11-20 05:06:45.126760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:31.901  [2024-11-20 05:06:45.129943] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:31.901  [2024-11-20 05:06:45.130194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:31.901  [2024-11-20 05:06:45.130375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.901  [2024-11-20 05:06:45.130602] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:31.901  [2024-11-20 05:06:45.130784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:31.901  [2024-11-20 05:06:45.130963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.901  [2024-11-20 05:06:45.131137] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:31.901  [2024-11-20 05:06:45.131325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:31.901  [2024-11-20 05:06:45.131552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.901  [2024-11-20 05:06:45.131749] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:31.901  [2024-11-20 05:06:45.131947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:31.901  [2024-11-20 05:06:45.132128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:31.901       05:06:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:31.901     05:06:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:16:38.458      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:16:38.458      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:38.458      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:38.458       05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:38.458       05:06:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.458       05:06:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:38.458       05:06:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:38.458     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:38.458      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:38.458      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:38.458       05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:38.458      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:38.458       05:06:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.458       05:06:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:38.458  [2024-11-20 05:06:51.426804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:38.458  [2024-11-20 05:06:51.429510] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:38.458  [2024-11-20 05:06:51.429600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:38.458  [2024-11-20 05:06:51.429642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.458  [2024-11-20 05:06:51.429705] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:38.458  [2024-11-20 05:06:51.429787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:38.458  [2024-11-20 05:06:51.429837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.458  [2024-11-20 05:06:51.429888] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:38.458  [2024-11-20 05:06:51.429929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:38.458  [2024-11-20 05:06:51.429969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.458  [2024-11-20 05:06:51.430009] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:38.458  [2024-11-20 05:06:51.430043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:38.458  [2024-11-20 05:06:51.430100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:38.459       05:06:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.459     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:16:38.459     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:16:38.459     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:16:38.459     05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:38.459      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:38.459      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:38.459      05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:38.459       05:06:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:38.459       05:06:51 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.459       05:06:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:38.459       05:06:51 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:38.459     05:06:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:16:45.026     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:16:45.026     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:16:45.026      05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:16:45.026      05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:45.026      05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:45.026       05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:45.026       05:06:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.026       05:06:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:45.026       05:06:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.026     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:16:45.026     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:45.026    05:06:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=25.98
00:16:45.026    05:06:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 25.98
00:16:45.026    05:06:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:16:45.026   05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.98
00:16:45.026   05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.98 1
00:16:45.026  remove_attach_helper took 25.98s to complete (handling 1 nvme drive(s)) 05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d
00:16:45.026   05:06:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.026   05:06:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:45.026   05:06:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.026   05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e
00:16:45.026   05:06:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:45.026   05:06:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:45.026   05:06:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:45.026   05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true
00:16:45.026   05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0
00:16:45.027    05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true
00:16:45.027    05:06:58 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0
00:16:45.027    05:06:58 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]]
00:16:45.027    05:06:58 sw_hotplug -- common/autotest_common.sh@711 -- # exec
00:16:45.027    05:06:58 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R
00:16:45.027     05:06:58 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true
00:16:45.027     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3
00:16:45.027     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6
00:16:45.027     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true
00:16:45.027     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs
00:16:45.027     05:06:58 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:51.592      05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:51.592      05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:51.592      05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:51.592       05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:51.592       05:07:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.592       05:07:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:51.592       05:07:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.592  [2024-11-20 05:07:04.338510] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:51.592  [2024-11-20 05:07:04.340395] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:51.592  [2024-11-20 05:07:04.340611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:51.592  [2024-11-20 05:07:04.340758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.592  [2024-11-20 05:07:04.340829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:51.592  [2024-11-20 05:07:04.340937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:51.592  [2024-11-20 05:07:04.341072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.592  [2024-11-20 05:07:04.341219] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:51.592  [2024-11-20 05:07:04.341286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:51.592  [2024-11-20 05:07:04.341480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.592  [2024-11-20 05:07:04.341642] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:51.592  [2024-11-20 05:07:04.341828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:51.592  [2024-11-20 05:07:04.341988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 ))
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:51.592      05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:51.592       05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:51.592      05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:51.592       05:07:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:51.592       05:07:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:51.592      05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:51.592       05:07:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:51.592     05:07:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:51.592     05:07:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:51.592     05:07:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:51.592     05:07:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:16:58.162      05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:16:58.162       05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:58.162      05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:58.162      05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:58.162       05:07:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.162       05:07:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:58.162       05:07:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:16:58.162  [2024-11-20 05:07:11.138578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:16:58.162  [2024-11-20 05:07:11.141482] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:58.162  [2024-11-20 05:07:11.141803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:58.162      05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:16:58.162  [2024-11-20 05:07:11.142158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.162  [2024-11-20 05:07:11.142517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:58.162  [2024-11-20 05:07:11.142919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:58.162      05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:16:58.162  [2024-11-20 05:07:11.143278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.162  [2024-11-20 05:07:11.143542] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:58.162  [2024-11-20 05:07:11.143810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:58.162       05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:16:58.162  [2024-11-20 05:07:11.144086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.162       05:07:11 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:58.162  [2024-11-20 05:07:11.144567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:16:58.162      05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:16:58.162  [2024-11-20 05:07:11.144786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:16:58.162  [2024-11-20 05:07:11.145006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:58.162       05:07:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:16:58.162       05:07:11 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:16:58.162     05:07:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:17:03.433     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:17:03.433     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:17:03.433      05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:17:03.433      05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:17:03.433       05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:17:03.433      05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:17:03.433       05:07:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.433       05:07:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:17:03.433       05:07:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}"
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1
00:17:03.692  [2024-11-20 05:07:17.438660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state.
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:17:03.692  [2024-11-20 05:07:17.440810] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:03.692  [2024-11-20 05:07:17.440926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:03.692  [2024-11-20 05:07:17.441009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.692  [2024-11-20 05:07:17.441073] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:03.692  [2024-11-20 05:07:17.441275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:03.692  [2024-11-20 05:07:17.441343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.692      05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:17:03.692  [2024-11-20 05:07:17.441585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:03.692  [2024-11-20 05:07:17.441647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:03.692  [2024-11-20 05:07:17.441711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.692  [2024-11-20 05:07:17.441824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command
00:17:03.692  [2024-11-20 05:07:17.441918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 
00:17:03.692  [2024-11-20 05:07:17.442013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:17:03.692      05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:17:03.692       05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:17:03.692      05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:17:03.692       05:07:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:03.692       05:07:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:17:03.692       05:07:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:17:03.692     05:07:17 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 6
00:17:10.255     05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:17:10.255     05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:17:10.255      05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:17:10.255      05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:17:10.255       05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:17:10.255      05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:17:10.255       05:07:23 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:10.255       05:07:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:17:10.255       05:07:23 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:10.255     05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\0\.\0 ]]
00:17:10.255     05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:17:10.255    05:07:23 sw_hotplug -- common/autotest_common.sh@719 -- # time=25.44
00:17:10.255    05:07:23 sw_hotplug -- common/autotest_common.sh@720 -- # echo 25.44
00:17:10.255    05:07:23 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:17:10.255   05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=25.44
00:17:10.255   05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 25.44 1
00:17:10.255  remove_attach_helper took 25.44s to complete (handling 1 nvme drive(s)) 05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:17:10.255   05:07:23 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 137931
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 137931 ']'
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 137931
00:17:10.255    05:07:23 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:17:10.255    05:07:23 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137931
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137931'
00:17:10.255  killing process with pid 137931
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@973 -- # kill 137931
00:17:10.255   05:07:23 sw_hotplug -- common/autotest_common.sh@978 -- # wait 137931
00:17:10.255   05:07:24 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:17:10.824  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:17:10.824  0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:11.760  ************************************
00:17:11.760  END TEST sw_hotplug
00:17:11.760  ************************************
00:17:11.760  
00:17:11.760  real	1m28.491s
00:17:11.760  user	1m2.805s
00:17:11.760  sys	0m15.758s
00:17:11.760   05:07:25 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:17:11.760   05:07:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:17:11.760   05:07:25  -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]]
00:17:11.760   05:07:25  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:17:11.760   05:07:25  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@260 -- # timing_exit lib
00:17:11.760   05:07:25  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:11.760   05:07:25  -- common/autotest_common.sh@10 -- # set +x
00:17:11.760   05:07:25  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:17:11.760   05:07:25  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:17:11.760   05:07:25  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:17:11.760   05:07:25  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:17:11.760   05:07:25  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:17:11.760   05:07:25  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:17:11.760   05:07:25  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:17:11.760   05:07:25  -- common/autotest_common.sh@726 -- # xtrace_disable
00:17:11.760   05:07:25  -- common/autotest_common.sh@10 -- # set +x
00:17:11.760   05:07:25  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:17:11.760   05:07:25  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:17:11.760   05:07:25  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:17:11.760   05:07:25  -- common/autotest_common.sh@10 -- # set +x
00:17:13.663  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:17:13.663  Waiting for block devices as requested
00:17:13.663  0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:17:14.231  0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda1,mount@vda:vda15, so not binding PCI dev
00:17:14.231  Cleaning
00:17:14.231  Removing:    /var/run/dpdk/spdk0/config
00:17:14.231  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:17:14.231  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:17:14.231  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:17:14.231  Removing:    /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:17:14.231  Removing:    /var/run/dpdk/spdk0/fbarray_memzone
00:17:14.231  Removing:    /var/run/dpdk/spdk0/hugepage_info
00:17:14.231  Removing:    /dev/shm/spdk_tgt_trace.pid123498
00:17:14.231  Removing:    /var/run/dpdk/spdk0
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid123308
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid123498
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid123736
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid123836
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid123869
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid123993
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124016
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124162
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124425
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124606
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124680
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124777
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124887
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid124971
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125019
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125062
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125145
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125256
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125770
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125826
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125879
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125900
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125975
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid125990
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126066
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126087
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126132
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126155
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126200
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126223
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126370
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126411
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126458
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126546
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126719
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126773
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid126811
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid127999
00:17:14.231  Removing:    /var/run/dpdk/spdk_pid128195
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid128383
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid128484
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid128605
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid128659
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid128681
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid128712
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129178
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129262
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129365
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129411
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129564
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129600
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129647
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129693
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129833
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid129981
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130205
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130512
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130528
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130576
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130589
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130605
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130625
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130645
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130666
00:17:14.489  Removing:    /var/run/dpdk/spdk_pid130686
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130694
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130715
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130737
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130757
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130773
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130793
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130806
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130827
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130847
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130862
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130883
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130918
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130935
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid130970
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131053
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131080
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131101
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131127
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131148
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131154
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131208
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131226
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131251
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131272
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131284
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131294
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131306
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131316
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131328
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131345
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131378
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131412
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131432
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131459
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131480
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131483
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131540
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131547
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131583
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131604
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131609
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131626
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131638
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131648
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131660
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131672
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131764
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131821
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131949
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid131957
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132006
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132057
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132090
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132105
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132134
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132164
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132181
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132265
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132318
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132363
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132633
00:17:14.490  Removing:    /var/run/dpdk/spdk_pid132742
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid132776
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid132810
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid132850
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid132888
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid132936
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid132975
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133074
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133144
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133182
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133425
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133523
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133622
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133660
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133700
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid133777
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134203
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134234
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134543
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134638
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134738
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134776
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134814
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid134836
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid136208
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid136331
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid136335
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid136367
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid136877
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid136991
00:17:14.749  Removing:    /var/run/dpdk/spdk_pid137931
00:17:14.749  Clean
00:17:14.749   05:07:28  -- common/autotest_common.sh@1453 -- # return 0
00:17:14.749   05:07:28  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:17:14.749   05:07:28  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:14.749   05:07:28  -- common/autotest_common.sh@10 -- # set +x
00:17:14.749   05:07:28  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:17:14.749   05:07:28  -- common/autotest_common.sh@732 -- # xtrace_disable
00:17:14.749   05:07:28  -- common/autotest_common.sh@10 -- # set +x
00:17:15.008   05:07:28  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:17:15.008   05:07:28  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:17:15.008   05:07:28  -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:17:15.008   05:07:28  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:17:15.008    05:07:28  -- spdk/autotest.sh@398 -- # hostname
00:17:15.008   05:07:28  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t ubuntu2204-cloud-1711172311-2200 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:17:15.267  geninfo: WARNING: invalid characters removed from testname!
00:17:54.068   05:08:07  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:00.647   05:08:13  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:03.180   05:08:16  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:05.713   05:08:19  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:08.999   05:08:22  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:12.284   05:08:25  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:18:15.571   05:08:29  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:18:15.571   05:08:29  -- spdk/autorun.sh@1 -- $ timing_finish
00:18:15.571   05:08:29  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:18:15.571   05:08:29  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:18:15.571   05:08:29  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:18:15.571   05:08:29  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:18:15.571  + [[ -n 2288 ]]
00:18:15.571  + sudo kill 2288
00:18:15.581  [Pipeline] }
00:18:15.600  [Pipeline] // timeout
00:18:15.605  [Pipeline] }
00:18:15.621  [Pipeline] // stage
00:18:15.627  [Pipeline] }
00:18:15.643  [Pipeline] // catchError
00:18:15.653  [Pipeline] stage
00:18:15.656  [Pipeline] { (Stop VM)
00:18:15.670  [Pipeline] sh
00:18:15.952  + vagrant halt
00:18:18.484  ==> default: Halting domain...
00:18:28.474  [Pipeline] sh
00:18:28.755  + vagrant destroy -f
00:18:32.046  ==> default: Removing domain...
00:18:32.058  [Pipeline] sh
00:18:32.340  + mv output /var/jenkins/workspace/ubuntu22-vg-autotest/output
00:18:32.349  [Pipeline] }
00:18:32.365  [Pipeline] // stage
00:18:32.370  [Pipeline] }
00:18:32.385  [Pipeline] // dir
00:18:32.390  [Pipeline] }
00:18:32.404  [Pipeline] // wrap
00:18:32.411  [Pipeline] }
00:18:32.424  [Pipeline] // catchError
00:18:32.434  [Pipeline] stage
00:18:32.436  [Pipeline] { (Epilogue)
00:18:32.449  [Pipeline] sh
00:18:32.731  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:18:50.897  [Pipeline] catchError
00:18:50.900  [Pipeline] {
00:18:50.912  [Pipeline] sh
00:18:51.194  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:18:51.194  Artifacts sizes are good
00:18:51.203  [Pipeline] }
00:18:51.217  [Pipeline] // catchError
00:18:51.228  [Pipeline] archiveArtifacts
00:18:51.235  Archiving artifacts
00:18:51.492  [Pipeline] cleanWs
00:18:51.502  [WS-CLEANUP] Deleting project workspace...
00:18:51.502  [WS-CLEANUP] Deferred wipeout is used...
00:18:51.507  [WS-CLEANUP] done
00:18:51.509  [Pipeline] }
00:18:51.524  [Pipeline] // stage
00:18:51.529  [Pipeline] }
00:18:51.542  [Pipeline] // node
00:18:51.547  [Pipeline] End of Pipeline
00:18:51.590  Finished: SUCCESS