00:00:00.001  Started by upstream project "autotest-per-patch" build number 132786
00:00:00.001  originally caused by:
00:00:00.001   Started by user sys_sgci
00:00:00.123  Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.135  The recommended git tool is: git
00:00:00.135  using credential 00000000-0000-0000-0000-000000000002
00:00:00.138   > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.184  Fetching changes from the remote Git repository
00:00:00.186   > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.226  Using shallow fetch with depth 1
00:00:00.226  Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.226   > git --version # timeout=10
00:00:00.256   > git --version # 'git version 2.39.2'
00:00:00.256  using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.275  Setting http proxy: proxy-dmz.intel.com:911
00:00:00.275   > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.479   > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.491   > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.505  Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.505   > git config core.sparsecheckout # timeout=10
00:00:07.518   > git read-tree -mu HEAD # timeout=10
00:00:07.536   > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.562  Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.562   > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.729  [Pipeline] Start of Pipeline
00:00:07.743  [Pipeline] library
00:00:07.745  Loading library shm_lib@master
00:00:07.745  Library shm_lib@master is cached. Copying from home.
00:00:07.760  [Pipeline] node
00:00:07.770  Running on VM-host-SM38 in /var/jenkins/workspace/freebsd-vg-autotest
00:00:07.772  [Pipeline] {
00:00:07.779  [Pipeline] catchError
00:00:07.781  [Pipeline] {
00:00:07.795  [Pipeline] wrap
00:00:07.804  [Pipeline] {
00:00:07.809  [Pipeline] stage
00:00:07.810  [Pipeline] { (Prologue)
00:00:07.835  [Pipeline] echo
00:00:07.837  Node: VM-host-SM38
00:00:07.841  [Pipeline] cleanWs
00:00:07.851  [WS-CLEANUP] Deleting project workspace...
00:00:07.851  [WS-CLEANUP] Deferred wipeout is used...
00:00:07.859  [WS-CLEANUP] done
00:00:08.096  [Pipeline] setCustomBuildProperty
00:00:08.183  [Pipeline] httpRequest
00:00:08.788  [Pipeline] echo
00:00:08.790  Sorcerer 10.211.164.112 is alive
00:00:08.799  [Pipeline] retry
00:00:08.800  [Pipeline] {
00:00:08.813  [Pipeline] httpRequest
00:00:08.818  HttpMethod: GET
00:00:08.819  URL: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.819  Sending request to url: http://10.211.164.112/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.823  Response Code: HTTP/1.1 200 OK
00:00:08.823  Success: Status code 200 is in the accepted range: 200,404
00:00:08.824  Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.321  [Pipeline] }
00:00:10.336  [Pipeline] // retry
00:00:10.343  [Pipeline] sh
00:00:10.629  + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.651  [Pipeline] httpRequest
00:00:11.467  [Pipeline] echo
00:00:11.469  Sorcerer 10.211.164.112 is alive
00:00:11.480  [Pipeline] retry
00:00:11.482  [Pipeline] {
00:00:11.498  [Pipeline] httpRequest
00:00:11.504  HttpMethod: GET
00:00:11.505  URL: http://10.211.164.112/packages/spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:00:11.505  Sending request to url: http://10.211.164.112/packages/spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:00:11.529  Response Code: HTTP/1.1 200 OK
00:00:11.529  Success: Status code 200 is in the accepted range: 200,404
00:00:11.530  Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:01:01.921  [Pipeline] }
00:01:01.939  [Pipeline] // retry
00:01:01.947  [Pipeline] sh
00:01:02.231  + tar --no-same-owner -xf spdk_51286f61aaf59ec518c0dd799e2c2ab48c22befd.tar.gz
00:01:05.599  [Pipeline] sh
00:01:05.883  + git -C spdk log --oneline -n5
00:01:05.883  51286f61a bdev: simplify bdev_reset_freeze_channel
00:01:05.883  a2f5e1c2d blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
00:01:05.883  0f59982b6 blob: don't use bs_load_ctx_fail in bs_write_used_* functions
00:01:05.883  0354bb8e8 nvme/rdma: Force qp disconnect on pg remove
00:01:05.883  0ea9ac02f accel/mlx5: Create pool of UMRs
00:01:05.904  [Pipeline] writeFile
00:01:05.919  [Pipeline] sh
00:01:06.203  + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:06.215  [Pipeline] sh
00:01:06.502  + cat autorun-spdk.conf
00:01:06.502  SPDK_TEST_UNITTEST=1
00:01:06.502  SPDK_RUN_VALGRIND=0
00:01:06.502  SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.502  SPDK_TEST_NVME=1
00:01:06.502  SPDK_TEST_BLOCKDEV=1
00:01:06.502  SPDK_TEST_RELEASE_BUILD=1
00:01:06.502  SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:06.511  RUN_NIGHTLY=0
00:01:06.513  [Pipeline] }
00:01:06.527  [Pipeline] // stage
00:01:06.542  [Pipeline] stage
00:01:06.544  [Pipeline] { (Run VM)
00:01:06.557  [Pipeline] sh
00:01:06.843  + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:06.843  + echo 'Start stage prepare_nvme.sh'
00:01:06.843  Start stage prepare_nvme.sh
00:01:06.843  + [[ -n 4 ]]
00:01:06.843  + disk_prefix=ex4
00:01:06.843  + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]]
00:01:06.843  + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]]
00:01:06.843  + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf
00:01:06.843  ++ SPDK_TEST_UNITTEST=1
00:01:06.843  ++ SPDK_RUN_VALGRIND=0
00:01:06.843  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.843  ++ SPDK_TEST_NVME=1
00:01:06.843  ++ SPDK_TEST_BLOCKDEV=1
00:01:06.843  ++ SPDK_TEST_RELEASE_BUILD=1
00:01:06.843  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:06.843  ++ RUN_NIGHTLY=0
00:01:06.843  + cd /var/jenkins/workspace/freebsd-vg-autotest
00:01:06.843  + nvme_files=()
00:01:06.843  + declare -A nvme_files
00:01:06.843  + backend_dir=/var/lib/libvirt/images/backends
00:01:06.843  + nvme_files['nvme.img']=5G
00:01:06.843  + nvme_files['nvme-cmb.img']=5G
00:01:06.843  + nvme_files['nvme-multi0.img']=4G
00:01:06.843  + nvme_files['nvme-multi1.img']=4G
00:01:06.843  + nvme_files['nvme-multi2.img']=4G
00:01:06.843  + nvme_files['nvme-openstack.img']=8G
00:01:06.843  + nvme_files['nvme-zns.img']=5G
00:01:06.844  + ((  SPDK_TEST_NVME_PMR == 1  ))
00:01:06.844  + ((  SPDK_TEST_FTL == 1  ))
00:01:06.844  + ((  SPDK_TEST_NVME_FDP == 1  ))
00:01:06.844  + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:06.844  + for nvme in "${!nvme_files[@]}"
00:01:06.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:06.844  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.844  + for nvme in "${!nvme_files[@]}"
00:01:06.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:06.844  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:06.844  + for nvme in "${!nvme_files[@]}"
00:01:06.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:06.844  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:06.844  + for nvme in "${!nvme_files[@]}"
00:01:06.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:06.844  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:06.844  + for nvme in "${!nvme_files[@]}"
00:01:06.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:06.844  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.844  + for nvme in "${!nvme_files[@]}"
00:01:06.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:06.844  Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.844  + for nvme in "${!nvme_files[@]}"
00:01:06.844  + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:07.105  Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.105  ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:07.105  + echo 'End stage prepare_nvme.sh'
00:01:07.105  End stage prepare_nvme.sh
00:01:07.118  [Pipeline] sh
00:01:07.404  + DISTRO=freebsd14
00:01:07.404  + CPUS=10
00:01:07.404  + RAM=14336
00:01:07.404  + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:07.404  Setup: -n 10 -s 14336 -x  -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -H -a -v -f freebsd14
00:01:07.404  
00:01:07.404  DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant
00:01:07.404  SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk
00:01:07.404  VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest
00:01:07.404  HELP=0
00:01:07.404  DRY_RUN=0
00:01:07.404  NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,
00:01:07.404  NVME_DISKS_TYPE=nvme,
00:01:07.404  NVME_AUTO_CREATE=0
00:01:07.404  NVME_DISKS_NAMESPACES=,
00:01:07.404  NVME_CMB=,
00:01:07.404  NVME_PMR=,
00:01:07.404  NVME_ZNS=,
00:01:07.404  NVME_MS=,
00:01:07.404  NVME_FDP=,
00:01:07.404  SPDK_VAGRANT_DISTRO=freebsd14
00:01:07.404  SPDK_VAGRANT_VMCPU=10
00:01:07.404  SPDK_VAGRANT_VMRAM=14336
00:01:07.404  SPDK_VAGRANT_PROVIDER=libvirt
00:01:07.404  SPDK_VAGRANT_HTTP_PROXY=
00:01:07.404  SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:07.404  SPDK_OPENSTACK_NETWORK=0
00:01:07.404  VAGRANT_PACKAGE_BOX=0
00:01:07.404  VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:07.404  FORCE_DISTRO=true
00:01:07.404  VAGRANT_BOX_VERSION=
00:01:07.404  EXTRA_VAGRANTFILES=
00:01:07.404  NIC_MODEL=e1000
00:01:07.404  
00:01:07.404  mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt'
00:01:07.404  /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest
00:01:09.953  Bringing machine 'default' up with 'libvirt' provider...
00:01:10.220  ==> default: Creating image (snapshot of base box volume).
00:01:10.490  ==> default: Creating domain with the following settings...
00:01:10.490  ==> default:  -- Name:              freebsd14-14.0-RELEASE-1725281765-2372_default_1733739362_b112947525cc273a693a
00:01:10.490  ==> default:  -- Domain type:       kvm
00:01:10.490  ==> default:  -- Cpus:              10
00:01:10.490  ==> default:  -- Feature:           acpi
00:01:10.490  ==> default:  -- Feature:           apic
00:01:10.490  ==> default:  -- Feature:           pae
00:01:10.490  ==> default:  -- Memory:            14336M
00:01:10.490  ==> default:  -- Memory Backing:    hugepages: 
00:01:10.490  ==> default:  -- Management MAC:    
00:01:10.490  ==> default:  -- Loader:            
00:01:10.490  ==> default:  -- Nvram:             
00:01:10.490  ==> default:  -- Base box:          spdk/freebsd14
00:01:10.490  ==> default:  -- Storage pool:      default
00:01:10.490  ==> default:  -- Image:             /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1725281765-2372_default_1733739362_b112947525cc273a693a.img (32G)
00:01:10.490  ==> default:  -- Volume Cache:      default
00:01:10.490  ==> default:  -- Kernel:            
00:01:10.490  ==> default:  -- Initrd:            
00:01:10.490  ==> default:  -- Graphics Type:     vnc
00:01:10.490  ==> default:  -- Graphics Port:     -1
00:01:10.490  ==> default:  -- Graphics IP:       127.0.0.1
00:01:10.490  ==> default:  -- Graphics Password: Not defined
00:01:10.490  ==> default:  -- Video Type:        cirrus
00:01:10.490  ==> default:  -- Video VRAM:        9216
00:01:10.490  ==> default:  -- Sound Type:	
00:01:10.490  ==> default:  -- Keymap:            en-us
00:01:10.490  ==> default:  -- TPM Path:          
00:01:10.490  ==> default:  -- INPUT:             type=mouse, bus=ps2
00:01:10.490  ==> default:  -- Command line args: 
00:01:10.490  ==> default:     -> value=-device, 
00:01:10.490  ==> default:     -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 
00:01:10.490  ==> default:     -> value=-drive, 
00:01:10.490  ==> default:     -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 
00:01:10.490  ==> default:     -> value=-device, 
00:01:10.490  ==> default:     -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 
00:01:10.752  ==> default: Creating shared folders metadata...
00:01:10.752  ==> default: Starting domain.
00:01:12.140  ==> default: Waiting for domain to get an IP address...
00:01:34.111  ==> default: Waiting for SSH to become available...
00:01:46.333  ==> default: Configuring and enabling network interfaces...
00:01:56.406  ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:18.444  ==> default: Mounting SSHFS shared folder...
00:02:18.444  ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output
00:02:18.444  ==> default: Checking Mount..
00:02:19.010  ==> default: Folder Successfully Mounted!
00:02:19.010  
00:02:19.010    SUCCESS!
00:02:19.010  
00:02:19.010    cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt and type "vagrant ssh" to use.
00:02:19.010    Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:19.010    Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt" to destroy all trace of vm.
00:02:19.010  
00:02:19.019  [Pipeline] }
00:02:19.034  [Pipeline] // stage
00:02:19.043  [Pipeline] dir
00:02:19.044  Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt
00:02:19.046  [Pipeline] {
00:02:19.058  [Pipeline] catchError
00:02:19.059  [Pipeline] {
00:02:19.083  [Pipeline] sh
00:02:19.362  + vagrant ssh-config --host vagrant
00:02:19.362  + tee ssh_conf
00:02:19.362  + sed -ne '/^Host/,$p'
00:02:21.892  Host vagrant
00:02:21.892    HostName 192.168.121.209
00:02:21.892    User vagrant
00:02:21.892    Port 22
00:02:21.892    UserKnownHostsFile /dev/null
00:02:21.892    StrictHostKeyChecking no
00:02:21.892    PasswordAuthentication no
00:02:21.892    IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1725281765-2372/libvirt/freebsd14
00:02:21.892    IdentitiesOnly yes
00:02:21.892    LogLevel FATAL
00:02:21.892    ForwardAgent yes
00:02:21.892    ForwardX11 yes
00:02:21.893  
00:02:21.905  [Pipeline] withEnv
00:02:21.907  [Pipeline] {
00:02:21.920  [Pipeline] sh
00:02:22.212  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:22.212  		source /etc/os-release
00:02:22.212  		[[ -e /image.version ]] && img=$(< /image.version)
00:02:22.212  		# Minimal, systemd-like check.
00:02:22.212  		if [[ -e /.dockerenv ]]; then
00:02:22.212  			# Clear garbage from the node'\''s name:
00:02:22.212  			#  agt-er_autotest_547-896 -> autotest_547-896
00:02:22.212  			#  $HOSTNAME is the actual container id
00:02:22.212  			agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:22.212  			if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:22.212  				# We can assume this is a mount from a host where container is running,
00:02:22.212  				# so fetch its hostname to easily identify the target swarm worker.
00:02:22.212  				container="$(< /etc/hostname) ($agent)"
00:02:22.212  			else
00:02:22.212  				# Fallback
00:02:22.212  				container=$agent
00:02:22.212  			fi
00:02:22.212  		fi
00:02:22.212  		echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:22.212  '
00:02:22.224  [Pipeline] }
00:02:22.240  [Pipeline] // withEnv
00:02:22.249  [Pipeline] setCustomBuildProperty
00:02:22.264  [Pipeline] stage
00:02:22.266  [Pipeline] { (Tests)
00:02:22.282  [Pipeline] sh
00:02:22.561  + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:22.574  [Pipeline] sh
00:02:22.852  + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:23.120  [Pipeline] timeout
00:02:23.120  Timeout set to expire in 1 hr 30 min
00:02:23.121  [Pipeline] {
00:02:23.133  [Pipeline] sh
00:02:23.409  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:23.975  HEAD is now at 51286f61a bdev: simplify bdev_reset_freeze_channel
00:02:23.987  [Pipeline] sh
00:02:24.262  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:24.270  [Pipeline] sh
00:02:24.539  + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:24.554  [Pipeline] sh
00:02:24.832  + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo'
00:02:24.832  ++ readlink -f spdk_repo
00:02:24.832  + DIR_ROOT=/home/vagrant/spdk_repo
00:02:24.832  + [[ -n /home/vagrant/spdk_repo ]]
00:02:24.832  + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:24.832  + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:24.832  + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:24.832  + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:24.832  + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:24.832  + [[ freebsd-vg-autotest == pkgdep-* ]]
00:02:24.832  + cd /home/vagrant/spdk_repo
00:02:24.832  + source /etc/os-release
00:02:24.832  ++ NAME=FreeBSD
00:02:24.832  ++ VERSION=14.0-RELEASE
00:02:24.832  ++ VERSION_ID=14.0
00:02:24.832  ++ ID=freebsd
00:02:24.832  ++ ANSI_COLOR='0;31'
00:02:24.832  ++ PRETTY_NAME='FreeBSD 14.0-RELEASE'
00:02:24.832  ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0
00:02:24.832  ++ HOME_URL=https://FreeBSD.org/
00:02:24.832  ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/
00:02:24.832  + uname -a
00:02:24.832  FreeBSD freebsd-cloud-1725281765-2372.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023     root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
00:02:24.832  + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:25.090  Contigmem (not present)
00:02:25.090  Buffer Size: not set
00:02:25.090  Num Buffers: not set
00:02:25.090  
00:02:25.090  
00:02:25.090  Type     BDF             Vendor Device Driver          
00:02:25.090  NVMe     0:16:0          0x1b36 0x0010 nvme0           
00:02:25.090  + rm -f /tmp/spdk-ld-path
00:02:25.090  + source autorun-spdk.conf
00:02:25.090  ++ SPDK_TEST_UNITTEST=1
00:02:25.090  ++ SPDK_RUN_VALGRIND=0
00:02:25.090  ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:25.090  ++ SPDK_TEST_NVME=1
00:02:25.090  ++ SPDK_TEST_BLOCKDEV=1
00:02:25.090  ++ SPDK_TEST_RELEASE_BUILD=1
00:02:25.090  ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:25.090  ++ RUN_NIGHTLY=0
00:02:25.090  + ((  SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1  ))
00:02:25.090  + [[ -n '' ]]
00:02:25.090  + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:25.090  + for M in /var/spdk/build-*-manifest.txt
00:02:25.090  + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:25.090  + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:25.090  + for M in /var/spdk/build-*-manifest.txt
00:02:25.090  + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:25.090  + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:25.090  ++ uname
00:02:25.090  + [[ FreeBSD == \L\i\n\u\x ]]
00:02:25.090  + dmesg_pid=1306
00:02:25.090  + tail -F /var/log/messages
00:02:25.090  + [[ FreeBSD == FreeBSD ]]
00:02:25.090  + export LC_ALL=C LC_CTYPE=C
00:02:25.090  + LC_ALL=C
00:02:25.090  + LC_CTYPE=C
00:02:25.090  + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:25.090  + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:25.090  + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:25.090  + [[ -x /usr/src/fio-static/fio ]]
00:02:25.090  + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:25.090  + [[ ! -v VFIO_QEMU_BIN ]]
00:02:25.090  + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:25.090  + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:02:25.090  + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:25.090  + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:25.090  + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:25.090  + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:25.090    10:17:17  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:25.090   10:17:17  -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_TEST_UNITTEST=1
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_VALGRIND=0
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_NVME=1
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_BLOCKDEV=1
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_RELEASE_BUILD=1
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:25.090    10:17:17  -- spdk_repo/autorun-spdk.conf@8 -- $ RUN_NIGHTLY=0
00:02:25.090   10:17:17  -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:25.090   10:17:17  -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:25.348     10:17:17  -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:25.348    10:17:17  -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:25.348     10:17:17  -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:25.348     10:17:17  -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:25.348     10:17:17  -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:25.348     10:17:17  -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:25.348      10:17:17  -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:25.348      10:17:17  -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:25.348      10:17:17  -- paths/export.sh@4 -- $ export PATH
00:02:25.348      10:17:17  -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:25.348    10:17:17  -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:25.349      10:17:17  -- common/autobuild_common.sh@493 -- $ date +%s
00:02:25.349     10:17:17  -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733739437.XXXXXX
00:02:25.349    10:17:17  -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733739437.XXXXXX.b6uXCs4obV
00:02:25.349    10:17:17  -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:25.349    10:17:17  -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:25.349    10:17:17  -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:25.349    10:17:17  -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:25.349    10:17:17  -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:25.349     10:17:17  -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:25.349     10:17:17  -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:25.349     10:17:17  -- common/autotest_common.sh@10 -- $ set +x
00:02:25.349    10:17:17  -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-coverage'
00:02:25.349    10:17:17  -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:25.349    10:17:17  -- pm/common@17 -- $ local monitor
00:02:25.349    10:17:17  -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:25.349    10:17:17  -- pm/common@25 -- $ sleep 1
00:02:25.349     10:17:17  -- pm/common@21 -- $ date +%s
00:02:25.349    10:17:17  -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1733739437
00:02:25.349  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1733739437_collect-vmstat.pm.log
00:02:26.369    10:17:18  -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:26.369   10:17:18  -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:26.369   10:17:18  -- spdk/autobuild.sh@12 -- $ umask 022
00:02:26.369   10:17:18  -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:26.369   10:17:18  -- spdk/autobuild.sh@16 -- $ date -u
00:02:26.369  Mon Dec  9 10:17:18 UTC 2024
00:02:26.369   10:17:18  -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:26.369  v25.01-pre-312-g51286f61a
00:02:26.369   10:17:18  -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:26.369   10:17:18  -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']'
00:02:26.369   10:17:18  -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:26.369   10:17:18  -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:26.369   10:17:18  -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:26.369   10:17:18  -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:26.369   10:17:18  -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:26.369   10:17:18  -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:26.369   10:17:18  -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:26.369   10:17:18  -- common/autobuild_common.sh@433 -- $ run_test unittest_build _unittest_build
00:02:26.369   10:17:18  -- common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']'
00:02:26.369   10:17:18  -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:26.369   10:17:18  -- common/autotest_common.sh@10 -- $ set +x
00:02:26.369  ************************************
00:02:26.369  START TEST unittest_build
00:02:26.369  ************************************
00:02:26.369   10:17:18 unittest_build -- common/autotest_common.sh@1129 -- $ _unittest_build
00:02:26.369   10:17:18 unittest_build -- common/autobuild_common.sh@424 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-coverage --without-shared
00:02:26.627  Notice: Vhost, rte_vhost library, virtio, and fuse
00:02:26.627  are only supported on Linux. Turning off default feature.
00:02:26.627  Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:26.627  Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:26.885  RDMA_OPTION_ID_ACK_TIMEOUT is not supported
00:02:26.885  Using 'verbs' RDMA provider
00:02:35.559  Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:43.662  Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:43.662  Creating mk/config.mk...done.
00:02:43.662  Creating mk/cc.flags.mk...done.
00:02:43.662  Type 'gmake' to build.
00:02:43.662   10:17:35 unittest_build -- common/autobuild_common.sh@425 -- $ gmake -j10
00:02:43.662  gmake[1]: Nothing to be done for 'all'.
00:02:46.190  ps: stdin: not a terminal
00:02:49.470  The Meson build system
00:02:49.470  Version: 1.4.1
00:02:49.470  Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:49.470  Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:49.470  Build type: native build
00:02:49.470  Program cat found: YES (/bin/cat)
00:02:49.470  Project name: DPDK
00:02:49.470  Project version: 24.03.0
00:02:49.470  C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)")
00:02:49.470  C linker for the host machine: /usr/bin/clang ld.lld 16.0.6
00:02:49.470  Host machine cpu family: x86_64
00:02:49.470  Host machine cpu: x86_64
00:02:49.470  Message: ## Building in Developer Mode ##
00:02:49.470  Program pkg-config found: YES (/usr/local/bin/pkg-config)
00:02:49.470  Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:49.470  Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:49.470  Program python3 found: YES (/usr/local/bin/python3.11)
00:02:49.470  Program cat found: YES (/bin/cat)
00:02:49.470  Compiler for C supports arguments -march=native: YES 
00:02:49.470  Checking for size of "void *" : 8 
00:02:49.470  Checking for size of "void *" : 8 (cached)
00:02:49.470  Compiler for C supports link arguments -Wl,--undefined-version: YES 
00:02:49.470  Library m found: YES
00:02:49.470  Library numa found: NO
00:02:49.470  Library fdt found: NO
00:02:49.470  Library execinfo found: YES
00:02:49.470  Has header "execinfo.h" : YES 
00:02:49.470  Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0
00:02:49.470  Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:49.470  Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:49.470  Run-time dependency jansson found: NO (tried pkgconfig)
00:02:49.470  Run-time dependency openssl found: YES 3.0.14
00:02:49.470  Run-time dependency libpcap found: NO (tried pkgconfig)
00:02:49.470  Library pcap found: YES
00:02:49.470  Has header "pcap.h" with dependency -lpcap: YES 
00:02:49.470  Compiler for C supports arguments -Wcast-qual: YES 
00:02:49.470  Compiler for C supports arguments -Wdeprecated: YES 
00:02:49.470  Compiler for C supports arguments -Wformat: YES 
00:02:49.470  Compiler for C supports arguments -Wformat-nonliteral: YES 
00:02:49.470  Compiler for C supports arguments -Wformat-security: YES 
00:02:49.470  Compiler for C supports arguments -Wmissing-declarations: YES 
00:02:49.470  Compiler for C supports arguments -Wmissing-prototypes: YES 
00:02:49.470  Compiler for C supports arguments -Wnested-externs: YES 
00:02:49.470  Compiler for C supports arguments -Wold-style-definition: YES 
00:02:49.470  Compiler for C supports arguments -Wpointer-arith: YES 
00:02:49.470  Compiler for C supports arguments -Wsign-compare: YES 
00:02:49.470  Compiler for C supports arguments -Wstrict-prototypes: YES 
00:02:49.470  Compiler for C supports arguments -Wundef: YES 
00:02:49.470  Compiler for C supports arguments -Wwrite-strings: YES 
00:02:49.470  Compiler for C supports arguments -Wno-address-of-packed-member: YES 
00:02:49.470  Compiler for C supports arguments -Wno-packed-not-aligned: NO 
00:02:49.470  Compiler for C supports arguments -Wno-missing-field-initializers: YES 
00:02:49.470  Compiler for C supports arguments -mavx512f: YES 
00:02:49.470  Checking if "AVX512 checking" compiles: YES 
00:02:49.470  Fetching value of define "__SSE4_2__" : 1 
00:02:49.470  Fetching value of define "__AES__" : 1 
00:02:49.470  Fetching value of define "__AVX__" : 1 
00:02:49.470  Fetching value of define "__AVX2__" : 1 
00:02:49.470  Fetching value of define "__AVX512BW__" : 1 
00:02:49.470  Fetching value of define "__AVX512CD__" : 1 
00:02:49.470  Fetching value of define "__AVX512DQ__" : 1 
00:02:49.470  Fetching value of define "__AVX512F__" : 1 
00:02:49.470  Fetching value of define "__AVX512VL__" : 1 
00:02:49.470  Fetching value of define "__PCLMUL__" : 1 
00:02:49.470  Fetching value of define "__RDRND__" : 1 
00:02:49.470  Fetching value of define "__RDSEED__" : 1 
00:02:49.470  Fetching value of define "__VPCLMULQDQ__" : 1 
00:02:49.470  Fetching value of define "__znver1__" : (undefined) 
00:02:49.470  Fetching value of define "__znver2__" : (undefined) 
00:02:49.470  Fetching value of define "__znver3__" : (undefined) 
00:02:49.470  Fetching value of define "__znver4__" : (undefined) 
00:02:49.470  Compiler for C supports arguments -Wno-format-truncation: NO 
00:02:49.470  Message: lib/log: Defining dependency "log"
00:02:49.470  Message: lib/kvargs: Defining dependency "kvargs"
00:02:49.470  Message: lib/telemetry: Defining dependency "telemetry"
00:02:49.470  Checking if "Detect argument count for CPU_OR" compiles: YES 
00:02:49.470  Checking for function "getentropy" : YES 
00:02:49.470  Message: lib/eal: Defining dependency "eal"
00:02:49.470  Message: lib/ring: Defining dependency "ring"
00:02:49.470  Message: lib/rcu: Defining dependency "rcu"
00:02:49.470  Message: lib/mempool: Defining dependency "mempool"
00:02:49.470  Message: lib/mbuf: Defining dependency "mbuf"
00:02:49.470  Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:49.470  Fetching value of define "__AVX512F__" : 1 (cached)
00:02:49.470  Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:49.470  Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:49.470  Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:49.470  Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:49.470  Compiler for C supports arguments -mpclmul: YES 
00:02:49.470  Compiler for C supports arguments -maes: YES 
00:02:49.470  Compiler for C supports arguments -mavx512f: YES (cached)
00:02:49.470  Compiler for C supports arguments -mavx512bw: YES 
00:02:49.470  Compiler for C supports arguments -mavx512dq: YES 
00:02:49.470  Compiler for C supports arguments -mavx512vl: YES 
00:02:49.470  Compiler for C supports arguments -mvpclmulqdq: YES 
00:02:49.470  Compiler for C supports arguments -mavx2: YES 
00:02:49.470  Compiler for C supports arguments -mavx: YES 
00:02:49.470  Message: lib/net: Defining dependency "net"
00:02:49.470  Message: lib/meter: Defining dependency "meter"
00:02:49.470  Message: lib/ethdev: Defining dependency "ethdev"
00:02:49.470  Message: lib/pci: Defining dependency "pci"
00:02:49.470  Message: lib/cmdline: Defining dependency "cmdline"
00:02:49.470  Message: lib/hash: Defining dependency "hash"
00:02:49.470  Message: lib/timer: Defining dependency "timer"
00:02:49.470  Message: lib/compressdev: Defining dependency "compressdev"
00:02:49.470  Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:49.470  Message: lib/dmadev: Defining dependency "dmadev"
00:02:49.470  Compiler for C supports arguments -Wno-cast-qual: YES 
00:02:49.470  Message: lib/reorder: Defining dependency "reorder"
00:02:49.470  Message: lib/security: Defining dependency "security"
00:02:49.470  Has header "linux/userfaultfd.h" : NO 
00:02:49.470  Has header "linux/vduse.h" : NO 
00:02:49.470  Compiler for C supports arguments -Wno-format-truncation: NO (cached)
00:02:49.470  Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:49.470  Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:49.470  Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:49.470  Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:49.470  Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:49.470  Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:49.470  Message: Disabling vdpa/* drivers: missing internal dependency "vhost"
00:02:49.470  Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:49.470  Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:49.470  Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:49.470  Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:49.470  Configuring doxy-api-html.conf using configuration
00:02:49.470  Configuring doxy-api-man.conf using configuration
00:02:49.470  Program mandb found: NO
00:02:49.470  Program sphinx-build found: NO
00:02:49.470  Configuring rte_build_config.h using configuration
00:02:49.470  Message: 
00:02:49.470  =================
00:02:49.470  Applications Enabled
00:02:49.470  =================
00:02:49.470  
00:02:49.470  apps:
00:02:49.470  	
00:02:49.470  
00:02:49.470  Message: 
00:02:49.470  =================
00:02:49.470  Libraries Enabled
00:02:49.470  =================
00:02:49.470  
00:02:49.471  libs:
00:02:49.471  	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 
00:02:49.471  	net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 
00:02:49.471  	cryptodev, dmadev, reorder, security, 
00:02:49.471  
00:02:49.471  Message: 
00:02:49.471  ===============
00:02:49.471  Drivers Enabled
00:02:49.471  ===============
00:02:49.471  
00:02:49.471  common:
00:02:49.471  	
00:02:49.471  bus:
00:02:49.471  	pci, vdev, 
00:02:49.471  mempool:
00:02:49.471  	ring, 
00:02:49.471  dma:
00:02:49.471  	
00:02:49.471  net:
00:02:49.471  	
00:02:49.471  crypto:
00:02:49.471  	
00:02:49.471  compress:
00:02:49.471  	
00:02:49.471  
00:02:49.471  Message: 
00:02:49.471  =================
00:02:49.471  Content Skipped
00:02:49.471  =================
00:02:49.471  
00:02:49.471  apps:
00:02:49.471  	dumpcap:	explicitly disabled via build config
00:02:49.471  	graph:	explicitly disabled via build config
00:02:49.471  	pdump:	explicitly disabled via build config
00:02:49.471  	proc-info:	explicitly disabled via build config
00:02:49.471  	test-acl:	explicitly disabled via build config
00:02:49.471  	test-bbdev:	explicitly disabled via build config
00:02:49.471  	test-cmdline:	explicitly disabled via build config
00:02:49.471  	test-compress-perf:	explicitly disabled via build config
00:02:49.471  	test-crypto-perf:	explicitly disabled via build config
00:02:49.471  	test-dma-perf:	explicitly disabled via build config
00:02:49.471  	test-eventdev:	explicitly disabled via build config
00:02:49.471  	test-fib:	explicitly disabled via build config
00:02:49.471  	test-flow-perf:	explicitly disabled via build config
00:02:49.471  	test-gpudev:	explicitly disabled via build config
00:02:49.471  	test-mldev:	explicitly disabled via build config
00:02:49.471  	test-pipeline:	explicitly disabled via build config
00:02:49.471  	test-pmd:	explicitly disabled via build config
00:02:49.471  	test-regex:	explicitly disabled via build config
00:02:49.471  	test-sad:	explicitly disabled via build config
00:02:49.471  	test-security-perf:	explicitly disabled via build config
00:02:49.471  	
00:02:49.471  libs:
00:02:49.471  	argparse:	explicitly disabled via build config
00:02:49.471  	metrics:	explicitly disabled via build config
00:02:49.471  	acl:	explicitly disabled via build config
00:02:49.471  	bbdev:	explicitly disabled via build config
00:02:49.471  	bitratestats:	explicitly disabled via build config
00:02:49.471  	bpf:	explicitly disabled via build config
00:02:49.471  	cfgfile:	explicitly disabled via build config
00:02:49.471  	distributor:	explicitly disabled via build config
00:02:49.471  	efd:	explicitly disabled via build config
00:02:49.471  	eventdev:	explicitly disabled via build config
00:02:49.471  	dispatcher:	explicitly disabled via build config
00:02:49.471  	gpudev:	explicitly disabled via build config
00:02:49.471  	gro:	explicitly disabled via build config
00:02:49.471  	gso:	explicitly disabled via build config
00:02:49.471  	ip_frag:	explicitly disabled via build config
00:02:49.471  	jobstats:	explicitly disabled via build config
00:02:49.471  	latencystats:	explicitly disabled via build config
00:02:49.471  	lpm:	explicitly disabled via build config
00:02:49.471  	member:	explicitly disabled via build config
00:02:49.471  	pcapng:	explicitly disabled via build config
00:02:49.471  	power:	only supported on Linux
00:02:49.471  	rawdev:	explicitly disabled via build config
00:02:49.471  	regexdev:	explicitly disabled via build config
00:02:49.471  	mldev:	explicitly disabled via build config
00:02:49.471  	rib:	explicitly disabled via build config
00:02:49.471  	sched:	explicitly disabled via build config
00:02:49.471  	stack:	explicitly disabled via build config
00:02:49.471  	vhost:	only supported on Linux
00:02:49.471  	ipsec:	explicitly disabled via build config
00:02:49.471  	pdcp:	explicitly disabled via build config
00:02:49.471  	fib:	explicitly disabled via build config
00:02:49.471  	port:	explicitly disabled via build config
00:02:49.471  	pdump:	explicitly disabled via build config
00:02:49.471  	table:	explicitly disabled via build config
00:02:49.471  	pipeline:	explicitly disabled via build config
00:02:49.471  	graph:	explicitly disabled via build config
00:02:49.471  	node:	explicitly disabled via build config
00:02:49.471  	
00:02:49.471  drivers:
00:02:49.471  	common/cpt:	not in enabled drivers build config
00:02:49.471  	common/dpaax:	not in enabled drivers build config
00:02:49.471  	common/iavf:	not in enabled drivers build config
00:02:49.471  	common/idpf:	not in enabled drivers build config
00:02:49.471  	common/ionic:	not in enabled drivers build config
00:02:49.471  	common/mvep:	not in enabled drivers build config
00:02:49.471  	common/octeontx:	not in enabled drivers build config
00:02:49.471  	bus/auxiliary:	not in enabled drivers build config
00:02:49.471  	bus/cdx:	not in enabled drivers build config
00:02:49.471  	bus/dpaa:	not in enabled drivers build config
00:02:49.471  	bus/fslmc:	not in enabled drivers build config
00:02:49.471  	bus/ifpga:	not in enabled drivers build config
00:02:49.471  	bus/platform:	not in enabled drivers build config
00:02:49.471  	bus/uacce:	not in enabled drivers build config
00:02:49.471  	bus/vmbus:	not in enabled drivers build config
00:02:49.471  	common/cnxk:	not in enabled drivers build config
00:02:49.471  	common/mlx5:	not in enabled drivers build config
00:02:49.471  	common/nfp:	not in enabled drivers build config
00:02:49.471  	common/nitrox:	not in enabled drivers build config
00:02:49.471  	common/qat:	not in enabled drivers build config
00:02:49.471  	common/sfc_efx:	not in enabled drivers build config
00:02:49.471  	mempool/bucket:	not in enabled drivers build config
00:02:49.471  	mempool/cnxk:	not in enabled drivers build config
00:02:49.471  	mempool/dpaa:	not in enabled drivers build config
00:02:49.471  	mempool/dpaa2:	not in enabled drivers build config
00:02:49.471  	mempool/octeontx:	not in enabled drivers build config
00:02:49.471  	mempool/stack:	not in enabled drivers build config
00:02:49.471  	dma/cnxk:	not in enabled drivers build config
00:02:49.471  	dma/dpaa:	not in enabled drivers build config
00:02:49.471  	dma/dpaa2:	not in enabled drivers build config
00:02:49.471  	dma/hisilicon:	not in enabled drivers build config
00:02:49.471  	dma/idxd:	not in enabled drivers build config
00:02:49.471  	dma/ioat:	not in enabled drivers build config
00:02:49.471  	dma/skeleton:	not in enabled drivers build config
00:02:49.471  	net/af_packet:	not in enabled drivers build config
00:02:49.471  	net/af_xdp:	not in enabled drivers build config
00:02:49.471  	net/ark:	not in enabled drivers build config
00:02:49.471  	net/atlantic:	not in enabled drivers build config
00:02:49.471  	net/avp:	not in enabled drivers build config
00:02:49.471  	net/axgbe:	not in enabled drivers build config
00:02:49.471  	net/bnx2x:	not in enabled drivers build config
00:02:49.471  	net/bnxt:	not in enabled drivers build config
00:02:49.471  	net/bonding:	not in enabled drivers build config
00:02:49.471  	net/cnxk:	not in enabled drivers build config
00:02:49.471  	net/cpfl:	not in enabled drivers build config
00:02:49.471  	net/cxgbe:	not in enabled drivers build config
00:02:49.471  	net/dpaa:	not in enabled drivers build config
00:02:49.471  	net/dpaa2:	not in enabled drivers build config
00:02:49.471  	net/e1000:	not in enabled drivers build config
00:02:49.471  	net/ena:	not in enabled drivers build config
00:02:49.471  	net/enetc:	not in enabled drivers build config
00:02:49.471  	net/enetfec:	not in enabled drivers build config
00:02:49.471  	net/enic:	not in enabled drivers build config
00:02:49.471  	net/failsafe:	not in enabled drivers build config
00:02:49.471  	net/fm10k:	not in enabled drivers build config
00:02:49.471  	net/gve:	not in enabled drivers build config
00:02:49.471  	net/hinic:	not in enabled drivers build config
00:02:49.471  	net/hns3:	not in enabled drivers build config
00:02:49.471  	net/i40e:	not in enabled drivers build config
00:02:49.471  	net/iavf:	not in enabled drivers build config
00:02:49.471  	net/ice:	not in enabled drivers build config
00:02:49.471  	net/idpf:	not in enabled drivers build config
00:02:49.471  	net/igc:	not in enabled drivers build config
00:02:49.471  	net/ionic:	not in enabled drivers build config
00:02:49.471  	net/ipn3ke:	not in enabled drivers build config
00:02:49.471  	net/ixgbe:	not in enabled drivers build config
00:02:49.471  	net/mana:	not in enabled drivers build config
00:02:49.471  	net/memif:	not in enabled drivers build config
00:02:49.471  	net/mlx4:	not in enabled drivers build config
00:02:49.471  	net/mlx5:	not in enabled drivers build config
00:02:49.471  	net/mvneta:	not in enabled drivers build config
00:02:49.471  	net/mvpp2:	not in enabled drivers build config
00:02:49.471  	net/netvsc:	not in enabled drivers build config
00:02:49.471  	net/nfb:	not in enabled drivers build config
00:02:49.471  	net/nfp:	not in enabled drivers build config
00:02:49.471  	net/ngbe:	not in enabled drivers build config
00:02:49.471  	net/null:	not in enabled drivers build config
00:02:49.471  	net/octeontx:	not in enabled drivers build config
00:02:49.471  	net/octeon_ep:	not in enabled drivers build config
00:02:49.471  	net/pcap:	not in enabled drivers build config
00:02:49.471  	net/pfe:	not in enabled drivers build config
00:02:49.471  	net/qede:	not in enabled drivers build config
00:02:49.471  	net/ring:	not in enabled drivers build config
00:02:49.471  	net/sfc:	not in enabled drivers build config
00:02:49.471  	net/softnic:	not in enabled drivers build config
00:02:49.471  	net/tap:	not in enabled drivers build config
00:02:49.471  	net/thunderx:	not in enabled drivers build config
00:02:49.471  	net/txgbe:	not in enabled drivers build config
00:02:49.471  	net/vdev_netvsc:	not in enabled drivers build config
00:02:49.471  	net/vhost:	not in enabled drivers build config
00:02:49.471  	net/virtio:	not in enabled drivers build config
00:02:49.471  	net/vmxnet3:	not in enabled drivers build config
00:02:49.471  	raw/*:	missing internal dependency, "rawdev"
00:02:49.471  	crypto/armv8:	not in enabled drivers build config
00:02:49.471  	crypto/bcmfs:	not in enabled drivers build config
00:02:49.471  	crypto/caam_jr:	not in enabled drivers build config
00:02:49.471  	crypto/ccp:	not in enabled drivers build config
00:02:49.471  	crypto/cnxk:	not in enabled drivers build config
00:02:49.471  	crypto/dpaa_sec:	not in enabled drivers build config
00:02:49.471  	crypto/dpaa2_sec:	not in enabled drivers build config
00:02:49.471  	crypto/ipsec_mb:	not in enabled drivers build config
00:02:49.471  	crypto/mlx5:	not in enabled drivers build config
00:02:49.471  	crypto/mvsam:	not in enabled drivers build config
00:02:49.471  	crypto/nitrox:	not in enabled drivers build config
00:02:49.471  	crypto/null:	not in enabled drivers build config
00:02:49.471  	crypto/octeontx:	not in enabled drivers build config
00:02:49.471  	crypto/openssl:	not in enabled drivers build config
00:02:49.471  	crypto/scheduler:	not in enabled drivers build config
00:02:49.471  	crypto/uadk:	not in enabled drivers build config
00:02:49.471  	crypto/virtio:	not in enabled drivers build config
00:02:49.472  	compress/isal:	not in enabled drivers build config
00:02:49.472  	compress/mlx5:	not in enabled drivers build config
00:02:49.472  	compress/nitrox:	not in enabled drivers build config
00:02:49.472  	compress/octeontx:	not in enabled drivers build config
00:02:49.472  	compress/zlib:	not in enabled drivers build config
00:02:49.472  	regex/*:	missing internal dependency, "regexdev"
00:02:49.472  	ml/*:	missing internal dependency, "mldev"
00:02:49.472  	vdpa/*:	missing internal dependency, "vhost"
00:02:49.472  	event/*:	missing internal dependency, "eventdev"
00:02:49.472  	baseband/*:	missing internal dependency, "bbdev"
00:02:49.472  	gpu/*:	missing internal dependency, "gpudev"
00:02:49.472  	
00:02:49.472  
00:02:49.472  Build targets in project: 80
00:02:49.472  
00:02:49.472  DPDK 24.03.0
00:02:49.472  
00:02:49.472    User defined options
00:02:49.472      buildtype          : debug
00:02:49.472      default_library    : static
00:02:49.472      libdir             : lib
00:02:49.472      prefix             : /
00:02:49.472      c_args             : -fPIC -Werror 
00:02:49.472      c_link_args        : 
00:02:49.472      cpu_instruction_set: native
00:02:49.472      disable_apps       : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:49.472      disable_libs       : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:49.472      enable_docs        : false
00:02:49.472      enable_drivers     : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:49.472      enable_kmods       : true
00:02:49.472      max_lcores         : 128
00:02:49.472      tests              : false
00:02:49.472  
00:02:49.472  Found ninja-1.11.1 at /usr/local/bin/ninja
00:02:49.730  ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:49.730  [1/232] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o
00:02:49.730  [2/232] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:49.730  [3/232] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:49.988  [4/232] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:49.988  [5/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:49.988  [6/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:49.988  [7/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:49.988  [8/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:49.988  [9/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:49.988  [10/232] Linking static target lib/librte_kvargs.a
00:02:49.988  [11/232] Linking static target lib/librte_log.a
00:02:49.988  [12/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:49.988  [13/232] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:49.988  [14/232] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:50.245  [15/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:50.245  [16/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:50.245  [17/232] Linking static target lib/librte_telemetry.a
00:02:50.245  [18/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:50.245  [19/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:50.245  [20/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:50.245  [21/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:50.245  [22/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:50.503  [23/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:50.503  [24/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:50.503  [25/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:50.503  [26/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:50.762  [27/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:50.762  [28/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:50.762  [29/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:50.762  [30/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:50.762  [31/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:50.762  [32/232] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:50.762  [33/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:50.762  [34/232] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:50.762  [35/232] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:50.762  [36/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:50.762  [37/232] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:51.019  [38/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:51.019  [39/232] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:51.019  [40/232] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:51.019  [41/232] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:51.019  [42/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:51.019  [43/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:51.019  [44/232] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:51.277  [45/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:51.277  [46/232] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:51.277  [47/232] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:51.277  [48/232] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:51.277  [49/232] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:51.277  [50/232] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:51.535  [51/232] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:51.535  [52/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o
00:02:51.535  [53/232] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:51.535  [54/232] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:51.535  [55/232] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:51.535  [56/232] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:51.535  [57/232] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:51.794  [58/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o
00:02:51.794  [59/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o
00:02:51.794  [60/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o
00:02:51.794  [61/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o
00:02:51.794  [62/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o
00:02:51.794  [63/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o
00:02:51.794  [64/232] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:51.794  [65/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o
00:02:51.794  [66/232] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:52.051  [67/232] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:52.051  [68/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o
00:02:52.051  [69/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o
00:02:52.051  [70/232] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:52.051  [71/232] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o
00:02:52.051  [72/232] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:52.308  [73/232] Linking static target lib/librte_eal.a
00:02:52.308  [74/232] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:52.308  [75/232] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:52.308  [76/232] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:52.308  [77/232] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:52.308  [78/232] Linking static target lib/librte_rcu.a
00:02:52.308  [79/232] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:52.566  [80/232] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:52.566  [81/232] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:52.566  [82/232] Linking static target lib/librte_ring.a
00:02:52.566  [83/232] Linking static target lib/librte_mempool.a
00:02:52.566  [84/232] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:52.566  [85/232] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:52.566  [86/232] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:52.566  [87/232] Linking static target lib/librte_mbuf.a
00:02:52.824  [88/232] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:52.824  [89/232] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:52.824  [90/232] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.824  [91/232] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:52.824  [92/232] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:52.824  [93/232] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.081  [94/232] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:53.081  [95/232] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.081  [96/232] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:53.081  [97/232] Linking static target lib/librte_meter.a
00:02:53.081  [98/232] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
00:02:53.081  [99/232] Linking static target lib/librte_net.a
00:02:53.081  [100/232] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:53.081  [101/232] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:53.081  [102/232] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:53.337  [103/232] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:53.595  [104/232] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:53.595  [105/232] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:53.595  [106/232] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.595  [107/232] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:53.853  [108/232] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.853  [109/232] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.853  [110/232] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.853  [111/232] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:53.853  [112/232] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:53.853  [113/232] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:53.853  [114/232] Linking static target lib/librte_pci.a
00:02:53.853  [115/232] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:54.110  [116/232] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:54.110  [117/232] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:54.110  [118/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:54.110  [119/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:54.110  [120/232] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:54.110  [121/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:54.110  [122/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:54.110  [123/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:54.110  [124/232] Linking target lib/librte_log.so.24.1
00:02:54.110  [125/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:54.110  [126/232] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:54.110  [127/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:54.110  [128/232] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:54.110  [129/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:54.368  [130/232] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:54.368  [131/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:54.368  [132/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:54.368  [133/232] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:54.368  [134/232] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:54.368  [135/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:54.368  [136/232] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:54.368  [137/232] Linking static target lib/librte_cmdline.a
00:02:54.368  [138/232] Linking static target lib/librte_ethdev.a
00:02:54.368  [139/232] Linking target lib/librte_telemetry.so.24.1
00:02:54.368  [140/232] Linking target lib/librte_kvargs.so.24.1
00:02:54.625  [141/232] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:54.625  [142/232] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:54.625  [143/232] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.625  [144/232] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:54.625  [145/232] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:54.625  [146/232] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:54.883  [147/232] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:54.883  [148/232] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:54.883  [149/232] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:54.883  [150/232] Linking static target lib/librte_hash.a
00:02:54.883  [151/232] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:54.883  [152/232] Linking static target lib/librte_timer.a
00:02:54.883  [153/232] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:54.883  [154/232] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:54.883  [155/232] Linking static target lib/librte_compressdev.a
00:02:55.139  [156/232] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.139  [157/232] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:55.140  [158/232] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:55.140  [159/232] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:55.140  [160/232] Linking static target lib/librte_dmadev.a
00:02:55.440  [161/232] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.440  [162/232] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:55.440  [163/232] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:02:55.440  [164/232] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:55.440  [165/232] Linking static target lib/librte_reorder.a
00:02:55.440  [166/232] Linking static target lib/librte_cryptodev.a
00:02:55.440  [167/232] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.440  [168/232] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:55.712  [169/232] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:02:55.712  [170/232] Linking static target lib/librte_security.a
00:02:55.712  [171/232] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:55.712  [172/232] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.712  [173/232] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:55.712  [174/232] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.712  [175/232] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.712  [176/232] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o
00:02:55.712  [177/232] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.712  [178/232] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:55.970  [179/232] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:55.970  [180/232] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:55.970  [181/232] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.970  [182/232] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.970  [183/232] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:55.970  [184/232] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:55.970  [185/232] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:55.970  [186/232] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:55.970  [187/232] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:55.970  [188/232] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:55.970  [189/232] Linking static target drivers/librte_bus_pci.a
00:02:55.970  [190/232] Linking static target drivers/librte_bus_vdev.a
00:02:55.970  [191/232] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:55.970  [192/232] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:56.228  [193/232] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.228  [194/232] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:56.228  [195/232] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:56.228  [196/232] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:56.228  [197/232] Linking static target drivers/librte_mempool_ring.a
00:02:56.228  [198/232] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.228  [199/232] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.794  [200/232] Generating kernel/freebsd/contigmem with a custom command
00:02:56.794  machine -> /usr/src/sys/amd64/include
00:02:56.794  x86 -> /usr/src/sys/x86/include
00:02:56.794  i386 -> /usr/src/sys/i386/include
00:02:56.794  awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h
00:02:56.794  awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h
00:02:56.794  awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h
00:02:56.794  touch opt_global.h
00:02:56.794  clang  -O2 -pipe -include rte_config.h  -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc  -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common  -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include     -MD  -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float  -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length   -mno-aes -mno-avx  -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o
00:02:56.794  ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r  -o contigmem.ko contigmem.o 
00:02:56.794  :> export_syms
00:02:56.794  awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko  export_syms | xargs -J% objcopy % contigmem.ko
00:02:56.794  objcopy --strip-debug contigmem.ko
00:02:57.052  [201/232] Generating kernel/freebsd/nic_uio with a custom command
00:02:57.052  clang  -O2 -pipe -include rte_config.h  -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc  -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common  -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include     -MD  -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float  -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length   -mno-aes -mno-avx  -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o
00:02:57.052  ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r  -o nic_uio.ko nic_uio.o 
00:02:57.052  :> export_syms
00:02:57.052  awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko  export_syms | xargs -J% objcopy % nic_uio.ko
00:02:57.052  objcopy --strip-debug nic_uio.ko
00:02:59.579  [202/232] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.164  [203/232] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:03:02.164  [204/232] Linking target lib/librte_eal.so.24.1
00:03:02.164  [205/232] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols
00:03:02.164  [206/232] Linking target lib/librte_ring.so.24.1
00:03:02.164  [207/232] Linking target lib/librte_timer.so.24.1
00:03:02.164  [208/232] Linking target lib/librte_pci.so.24.1
00:03:02.164  [209/232] Linking target drivers/librte_bus_vdev.so.24.1
00:03:02.164  [210/232] Linking target lib/librte_meter.so.24.1
00:03:02.164  [211/232] Linking target lib/librte_dmadev.so.24.1
00:03:02.164  [212/232] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols
00:03:02.164  [213/232] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols
00:03:02.164  [214/232] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols
00:03:02.164  [215/232] Linking target lib/librte_mempool.so.24.1
00:03:02.164  [216/232] Linking target lib/librte_rcu.so.24.1
00:03:02.164  [217/232] Linking target drivers/librte_bus_pci.so.24.1
00:03:02.164  [218/232] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols
00:03:02.164  [219/232] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols
00:03:02.164  [220/232] Linking target drivers/librte_mempool_ring.so.24.1
00:03:02.164  [221/232] Linking target lib/librte_mbuf.so.24.1
00:03:02.421  [222/232] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols
00:03:02.421  [223/232] Linking target lib/librte_reorder.so.24.1
00:03:02.421  [224/232] Linking target lib/librte_compressdev.so.24.1
00:03:02.421  [225/232] Linking target lib/librte_net.so.24.1
00:03:02.421  [226/232] Linking target lib/librte_cryptodev.so.24.1
00:03:02.421  [227/232] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols
00:03:02.421  [228/232] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols
00:03:02.421  [229/232] Linking target lib/librte_hash.so.24.1
00:03:02.421  [230/232] Linking target lib/librte_security.so.24.1
00:03:02.421  [231/232] Linking target lib/librte_cmdline.so.24.1
00:03:02.421  [232/232] Linking target lib/librte_ethdev.so.24.1
00:03:02.421  INFO: autodetecting backend as ninja
00:03:02.421  INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:02.987    CC lib/log/log.o
00:03:02.987    CC lib/log/log_flags.o
00:03:02.987    CC lib/log/log_deprecated.o
00:03:02.987    CC lib/ut/ut.o
00:03:02.987    CC lib/ut_mock/mock.o
00:03:02.987    LIB libspdk_ut_mock.a
00:03:02.987    LIB libspdk_ut.a
00:03:02.987    LIB libspdk_log.a
00:03:03.244    CC lib/dma/dma.o
00:03:03.244    CC lib/ioat/ioat.o
00:03:03.244    CC lib/util/base64.o
00:03:03.244    CC lib/util/bit_array.o
00:03:03.244    CC lib/util/crc16.o
00:03:03.244    CC lib/util/cpuset.o
00:03:03.244    CC lib/util/crc32.o
00:03:03.244    CC lib/util/crc32c.o
00:03:03.244    CC lib/util/crc32_ieee.o
00:03:03.244    CXX lib/trace_parser/trace.o
00:03:03.244    CC lib/util/crc64.o
00:03:03.244    CC lib/util/dif.o
00:03:03.244    CC lib/util/fd.o
00:03:03.244    CC lib/util/fd_group.o
00:03:03.244    CC lib/util/file.o
00:03:03.244    CC lib/util/hexlify.o
00:03:03.244    LIB libspdk_dma.a
00:03:03.244    CC lib/util/iov.o
00:03:03.244    CC lib/util/math.o
00:03:03.244    LIB libspdk_ioat.a
00:03:03.244    CC lib/util/net.o
00:03:03.244    CC lib/util/pipe.o
00:03:03.244    CC lib/util/strerror_tls.o
00:03:03.244    CC lib/util/string.o
00:03:03.244    CC lib/util/uuid.o
00:03:03.244    CC lib/util/xor.o
00:03:03.244    CC lib/util/zipf.o
00:03:03.244    CC lib/util/md5.o
00:03:03.501    LIB libspdk_util.a
00:03:03.501    CC lib/conf/conf.o
00:03:03.501    CC lib/vmd/vmd.o
00:03:03.501    CC lib/vmd/led.o
00:03:03.501    CC lib/json/json_parse.o
00:03:03.501    CC lib/json/json_util.o
00:03:03.501    CC lib/json/json_write.o
00:03:03.501    CC lib/env_dpdk/env.o
00:03:03.501    CC lib/rdma_utils/rdma_utils.o
00:03:03.501    CC lib/idxd/idxd.o
00:03:03.501    CC lib/env_dpdk/memory.o
00:03:03.501    CC lib/idxd/idxd_user.o
00:03:03.501    LIB libspdk_conf.a
00:03:03.501    CC lib/env_dpdk/pci.o
00:03:03.501    CC lib/env_dpdk/init.o
00:03:03.501    LIB libspdk_rdma_utils.a
00:03:03.501    CC lib/env_dpdk/threads.o
00:03:03.501    LIB libspdk_json.a
00:03:03.501    CC lib/env_dpdk/pci_ioat.o
00:03:03.501    LIB libspdk_vmd.a
00:03:03.501    LIB libspdk_idxd.a
00:03:03.501    CC lib/env_dpdk/pci_virtio.o
00:03:03.501    CC lib/env_dpdk/pci_vmd.o
00:03:03.501    CC lib/rdma_provider/common.o
00:03:03.501    CC lib/rdma_provider/rdma_provider_verbs.o
00:03:03.759    CC lib/env_dpdk/pci_idxd.o
00:03:03.759    CC lib/env_dpdk/pci_event.o
00:03:03.759    CC lib/jsonrpc/jsonrpc_server.o
00:03:03.759    CC lib/env_dpdk/sigbus_handler.o
00:03:03.759    CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:03.759    CC lib/jsonrpc/jsonrpc_client.o
00:03:03.759    CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:03.759    CC lib/env_dpdk/pci_dpdk.o
00:03:03.759    CC lib/env_dpdk/pci_dpdk_2207.o
00:03:03.759    LIB libspdk_rdma_provider.a
00:03:03.759    CC lib/env_dpdk/pci_dpdk_2211.o
00:03:03.759    LIB libspdk_jsonrpc.a
00:03:03.759    CC lib/rpc/rpc.o
00:03:03.759    LIB libspdk_env_dpdk.a
00:03:04.017    LIB libspdk_rpc.a
00:03:04.017    CC lib/trace/trace.o
00:03:04.017    CC lib/trace/trace_rpc.o
00:03:04.017    CC lib/trace/trace_flags.o
00:03:04.017    CC lib/notify/notify.o
00:03:04.017    CC lib/notify/notify_rpc.o
00:03:04.017    CC lib/keyring/keyring.o
00:03:04.017    CC lib/keyring/keyring_rpc.o
00:03:04.017    LIB libspdk_notify.a
00:03:04.017    LIB libspdk_trace_parser.a
00:03:04.017    LIB libspdk_keyring.a
00:03:04.017    LIB libspdk_trace.a
00:03:04.017    CC lib/thread/thread.o
00:03:04.017    CC lib/sock/sock.o
00:03:04.017    CC lib/thread/iobuf.o
00:03:04.017    CC lib/sock/sock_rpc.o
00:03:04.274    LIB libspdk_sock.a
00:03:04.274    LIB libspdk_thread.a
00:03:04.274    CC lib/nvme/nvme_ctrlr_cmd.o
00:03:04.274    CC lib/nvme/nvme_ctrlr.o
00:03:04.274    CC lib/nvme/nvme_ns_cmd.o
00:03:04.275    CC lib/nvme/nvme_fabric.o
00:03:04.275    CC lib/nvme/nvme_ns.o
00:03:04.275    CC lib/nvme/nvme_pcie_common.o
00:03:04.275    CC lib/nvme/nvme_pcie.o
00:03:04.275    CC lib/nvme/nvme_qpair.o
00:03:04.275    CC lib/nvme/nvme.o
00:03:04.275    CC lib/nvme/nvme_quirks.o
00:03:04.532    CC lib/nvme/nvme_transport.o
00:03:04.790    CC lib/nvme/nvme_discovery.o
00:03:04.790    CC lib/accel/accel.o
00:03:04.790    CC lib/blob/blobstore.o
00:03:04.790    CC lib/init/json_config.o
00:03:04.790    CC lib/blob/request.o
00:03:04.790    CC lib/init/subsystem.o
00:03:04.790    CC lib/blob/zeroes.o
00:03:04.790    CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:04.790    CC lib/accel/accel_rpc.o
00:03:04.790    CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:04.790    CC lib/init/subsystem_rpc.o
00:03:04.790    CC lib/accel/accel_sw.o
00:03:04.790    CC lib/blob/blob_bs_dev.o
00:03:04.790    CC lib/nvme/nvme_tcp.o
00:03:04.790    CC lib/init/rpc.o
00:03:04.790    LIB libspdk_accel.a
00:03:04.790    LIB libspdk_init.a
00:03:04.790    CC lib/nvme/nvme_opal.o
00:03:04.790    CC lib/nvme/nvme_io_msg.o
00:03:04.790    CC lib/nvme/nvme_poll_group.o
00:03:05.048    CC lib/nvme/nvme_zns.o
00:03:05.048    CC lib/event/app.o
00:03:05.048    CC lib/bdev/bdev.o
00:03:05.048    CC lib/event/reactor.o
00:03:05.048    CC lib/bdev/bdev_rpc.o
00:03:05.048    CC lib/event/log_rpc.o
00:03:05.048    CC lib/bdev/bdev_zone.o
00:03:05.048    LIB libspdk_blob.a
00:03:05.048    CC lib/nvme/nvme_stubs.o
00:03:05.048    CC lib/event/app_rpc.o
00:03:05.048    CC lib/nvme/nvme_auth.o
00:03:05.305    CC lib/bdev/part.o
00:03:05.305    CC lib/nvme/nvme_rdma.o
00:03:05.305    CC lib/event/scheduler_static.o
00:03:05.305    CC lib/bdev/scsi_nvme.o
00:03:05.305    LIB libspdk_event.a
00:03:05.305    CC lib/blobfs/blobfs.o
00:03:05.305    CC lib/blobfs/tree.o
00:03:05.305    CC lib/lvol/lvol.o
00:03:05.564    LIB libspdk_blobfs.a
00:03:05.564    LIB libspdk_bdev.a
00:03:05.564    LIB libspdk_lvol.a
00:03:05.564    LIB libspdk_nvme.a
00:03:05.564    CC lib/scsi/dev.o
00:03:05.564    CC lib/scsi/port.o
00:03:05.564    CC lib/scsi/scsi.o
00:03:05.564    CC lib/scsi/lun.o
00:03:05.564    CC lib/scsi/scsi_pr.o
00:03:05.564    CC lib/scsi/scsi_bdev.o
00:03:05.564    CC lib/scsi/scsi_rpc.o
00:03:05.564    CC lib/scsi/task.o
00:03:05.564    CC lib/nvmf/ctrlr.o
00:03:05.564    CC lib/nvmf/ctrlr_bdev.o
00:03:05.564    CC lib/nvmf/ctrlr_discovery.o
00:03:05.564    CC lib/nvmf/subsystem.o
00:03:05.564    CC lib/nvmf/nvmf.o
00:03:05.564    CC lib/nvmf/nvmf_rpc.o
00:03:05.835    CC lib/nvmf/transport.o
00:03:05.835    CC lib/nvmf/tcp.o
00:03:05.835    CC lib/nvmf/stubs.o
00:03:05.835    LIB libspdk_scsi.a
00:03:05.835    CC lib/nvmf/mdns_server.o
00:03:05.835    CC lib/nvmf/rdma.o
00:03:05.835    CC lib/nvmf/auth.o
00:03:05.835    CC lib/iscsi/conn.o
00:03:05.835    CC lib/iscsi/init_grp.o
00:03:05.835    CC lib/iscsi/iscsi.o
00:03:05.835    CC lib/iscsi/param.o
00:03:05.835    CC lib/iscsi/portal_grp.o
00:03:05.835    CC lib/iscsi/tgt_node.o
00:03:05.835    CC lib/iscsi/iscsi_subsystem.o
00:03:06.108    CC lib/iscsi/iscsi_rpc.o
00:03:06.108    CC lib/iscsi/task.o
00:03:06.108    LIB libspdk_nvmf.a
00:03:06.108    LIB libspdk_iscsi.a
00:03:06.366    CC module/env_dpdk/env_dpdk_rpc.o
00:03:06.366    CC module/accel/error/accel_error.o
00:03:06.366    CC module/accel/error/accel_error_rpc.o
00:03:06.366    CC module/blob/bdev/blob_bdev.o
00:03:06.366    CC module/accel/iaa/accel_iaa.o
00:03:06.366    CC module/accel/dsa/accel_dsa.o
00:03:06.366    CC module/keyring/file/keyring.o
00:03:06.366    CC module/sock/posix/posix.o
00:03:06.366    CC module/accel/ioat/accel_ioat.o
00:03:06.366    CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:06.366    LIB libspdk_env_dpdk_rpc.a
00:03:06.366    CC module/keyring/file/keyring_rpc.o
00:03:06.366    CC module/accel/iaa/accel_iaa_rpc.o
00:03:06.366    CC module/accel/dsa/accel_dsa_rpc.o
00:03:06.366    LIB libspdk_accel_error.a
00:03:06.366    CC module/accel/ioat/accel_ioat_rpc.o
00:03:06.366    LIB libspdk_blob_bdev.a
00:03:06.366    LIB libspdk_scheduler_dynamic.a
00:03:06.366    LIB libspdk_accel_iaa.a
00:03:06.366    LIB libspdk_accel_dsa.a
00:03:06.366    LIB libspdk_keyring_file.a
00:03:06.366    LIB libspdk_accel_ioat.a
00:03:06.643    CC module/bdev/gpt/gpt.o
00:03:06.643    CC module/bdev/error/vbdev_error.o
00:03:06.643    CC module/bdev/delay/vbdev_delay.o
00:03:06.643    CC module/bdev/null/bdev_null.o
00:03:06.643    CC module/bdev/malloc/bdev_malloc.o
00:03:06.643    CC module/bdev/nvme/bdev_nvme.o
00:03:06.643    CC module/blobfs/bdev/blobfs_bdev.o
00:03:06.643    CC module/bdev/lvol/vbdev_lvol.o
00:03:06.643    CC module/bdev/passthru/vbdev_passthru.o
00:03:06.643    LIB libspdk_sock_posix.a
00:03:06.643    CC module/bdev/null/bdev_null_rpc.o
00:03:06.643    CC module/bdev/gpt/vbdev_gpt.o
00:03:06.643    CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:06.643    CC module/bdev/delay/vbdev_delay_rpc.o
00:03:06.643    CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:06.643    CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:06.643    CC module/bdev/error/vbdev_error_rpc.o
00:03:06.643    CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:06.643    LIB libspdk_bdev_null.a
00:03:06.643    LIB libspdk_bdev_gpt.a
00:03:06.643    CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:06.643    CC module/bdev/nvme/nvme_rpc.o
00:03:06.643    LIB libspdk_bdev_delay.a
00:03:06.643    LIB libspdk_bdev_malloc.a
00:03:06.643    CC module/bdev/nvme/bdev_mdns_client.o
00:03:06.643    LIB libspdk_bdev_error.a
00:03:06.643    LIB libspdk_blobfs_bdev.a
00:03:06.643    LIB libspdk_bdev_lvol.a
00:03:06.643    CC module/bdev/raid/bdev_raid.o
00:03:06.643    CC module/bdev/raid/bdev_raid_rpc.o
00:03:06.643    CC module/bdev/split/vbdev_split.o
00:03:06.643    LIB libspdk_bdev_passthru.a
00:03:06.643    CC module/bdev/raid/bdev_raid_sb.o
00:03:06.643    CC module/bdev/raid/raid0.o
00:03:06.643    CC module/bdev/split/vbdev_split_rpc.o
00:03:06.643    CC module/bdev/zone_block/vbdev_zone_block.o
00:03:06.958    CC module/bdev/aio/bdev_aio.o
00:03:06.958    CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:06.958    CC module/bdev/aio/bdev_aio_rpc.o
00:03:06.958    CC module/bdev/raid/raid1.o
00:03:06.958    CC module/bdev/raid/concat.o
00:03:06.958    LIB libspdk_bdev_split.a
00:03:06.958    LIB libspdk_bdev_zone_block.a
00:03:06.958    LIB libspdk_bdev_aio.a
00:03:06.958    LIB libspdk_bdev_nvme.a
00:03:06.958    LIB libspdk_bdev_raid.a
00:03:07.216    CC module/event/subsystems/vmd/vmd.o
00:03:07.216    CC module/event/subsystems/vmd/vmd_rpc.o
00:03:07.216    CC module/event/subsystems/scheduler/scheduler.o
00:03:07.216    CC module/event/subsystems/keyring/keyring.o
00:03:07.216    CC module/event/subsystems/iobuf/iobuf.o
00:03:07.216    CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:07.216    CC module/event/subsystems/sock/sock.o
00:03:07.216    LIB libspdk_event_keyring.a
00:03:07.216    LIB libspdk_event_scheduler.a
00:03:07.216    LIB libspdk_event_sock.a
00:03:07.216    LIB libspdk_event_vmd.a
00:03:07.216    LIB libspdk_event_iobuf.a
00:03:07.474    CC module/event/subsystems/accel/accel.o
00:03:07.474    LIB libspdk_event_accel.a
00:03:07.474    CC module/event/subsystems/bdev/bdev.o
00:03:07.737    LIB libspdk_event_bdev.a
00:03:07.737    CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:07.737    CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:07.737    CC module/event/subsystems/scsi/scsi.o
00:03:07.737    LIB libspdk_event_scsi.a
00:03:07.995    LIB libspdk_event_nvmf.a
00:03:07.995    CC module/event/subsystems/iscsi/iscsi.o
00:03:07.995    LIB libspdk_event_iscsi.a
00:03:07.995    CC app/spdk_lspci/spdk_lspci.o
00:03:07.995    CC app/trace_record/trace_record.o
00:03:07.995    CC app/spdk_nvme_identify/identify.o
00:03:07.995    CXX app/trace/trace.o
00:03:07.995    CC app/spdk_nvme_perf/perf.o
00:03:07.995    CC examples/util/zipf/zipf.o
00:03:07.995    CC app/nvmf_tgt/nvmf_main.o
00:03:08.253    CC app/iscsi_tgt/iscsi_tgt.o
00:03:08.253    CC test/thread/poller_perf/poller_perf.o
00:03:08.253    CC app/spdk_tgt/spdk_tgt.o
00:03:08.253    LINK spdk_lspci
00:03:08.253    LINK spdk_trace_record
00:03:08.253    LINK zipf
00:03:08.253    LINK nvmf_tgt
00:03:08.253    LINK poller_perf
00:03:08.253    LINK spdk_tgt
00:03:08.253    LINK iscsi_tgt
00:03:08.253    LINK spdk_nvme_perf
00:03:08.253    CC test/dma/test_dma/test_dma.o
00:03:08.253    CC examples/ioat/perf/perf.o
00:03:08.253    LINK spdk_nvme_identify
00:03:08.253    CC test/thread/lock/spdk_lock.o
00:03:08.253    CC test/app/bdev_svc/bdev_svc.o
00:03:08.253    CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:08.253    LINK ioat_perf
00:03:08.253    LINK bdev_svc
00:03:08.253    TEST_HEADER include/spdk/accel.h
00:03:08.253    TEST_HEADER include/spdk/accel_module.h
00:03:08.253    TEST_HEADER include/spdk/assert.h
00:03:08.253    CC examples/vmd/lsvmd/lsvmd.o
00:03:08.511    TEST_HEADER include/spdk/barrier.h
00:03:08.511    TEST_HEADER include/spdk/base64.h
00:03:08.511    TEST_HEADER include/spdk/bdev.h
00:03:08.511    TEST_HEADER include/spdk/bdev_module.h
00:03:08.511    TEST_HEADER include/spdk/bdev_zone.h
00:03:08.511    TEST_HEADER include/spdk/bit_array.h
00:03:08.511    TEST_HEADER include/spdk/bit_pool.h
00:03:08.511    TEST_HEADER include/spdk/blob.h
00:03:08.511    TEST_HEADER include/spdk/blob_bdev.h
00:03:08.511    TEST_HEADER include/spdk/blobfs.h
00:03:08.511    TEST_HEADER include/spdk/blobfs_bdev.h
00:03:08.511    TEST_HEADER include/spdk/conf.h
00:03:08.511    TEST_HEADER include/spdk/config.h
00:03:08.511    TEST_HEADER include/spdk/cpuset.h
00:03:08.511    TEST_HEADER include/spdk/crc16.h
00:03:08.511    TEST_HEADER include/spdk/crc32.h
00:03:08.511    TEST_HEADER include/spdk/crc64.h
00:03:08.511    TEST_HEADER include/spdk/dif.h
00:03:08.511    TEST_HEADER include/spdk/dma.h
00:03:08.511    TEST_HEADER include/spdk/endian.h
00:03:08.511    TEST_HEADER include/spdk/env.h
00:03:08.511    TEST_HEADER include/spdk/env_dpdk.h
00:03:08.511    TEST_HEADER include/spdk/event.h
00:03:08.511    TEST_HEADER include/spdk/fd.h
00:03:08.511    TEST_HEADER include/spdk/fd_group.h
00:03:08.511    TEST_HEADER include/spdk/file.h
00:03:08.511    TEST_HEADER include/spdk/fsdev.h
00:03:08.511    TEST_HEADER include/spdk/fsdev_module.h
00:03:08.511    TEST_HEADER include/spdk/ftl.h
00:03:08.511    TEST_HEADER include/spdk/fuse_dispatcher.h
00:03:08.511    TEST_HEADER include/spdk/gpt_spec.h
00:03:08.511    TEST_HEADER include/spdk/hexlify.h
00:03:08.511    CC examples/ioat/verify/verify.o
00:03:08.511    TEST_HEADER include/spdk/histogram_data.h
00:03:08.511    TEST_HEADER include/spdk/idxd.h
00:03:08.511    TEST_HEADER include/spdk/idxd_spec.h
00:03:08.511    TEST_HEADER include/spdk/init.h
00:03:08.511    TEST_HEADER include/spdk/ioat.h
00:03:08.511    TEST_HEADER include/spdk/ioat_spec.h
00:03:08.511    CC examples/idxd/perf/perf.o
00:03:08.511    TEST_HEADER include/spdk/iscsi_spec.h
00:03:08.511    TEST_HEADER include/spdk/json.h
00:03:08.511    LINK lsvmd
00:03:08.511    TEST_HEADER include/spdk/jsonrpc.h
00:03:08.511    TEST_HEADER include/spdk/keyring.h
00:03:08.511    TEST_HEADER include/spdk/keyring_module.h
00:03:08.511    TEST_HEADER include/spdk/likely.h
00:03:08.511    TEST_HEADER include/spdk/log.h
00:03:08.511    LINK nvme_fuzz
00:03:08.511    TEST_HEADER include/spdk/lvol.h
00:03:08.511    TEST_HEADER include/spdk/md5.h
00:03:08.511    TEST_HEADER include/spdk/memory.h
00:03:08.511    TEST_HEADER include/spdk/mmio.h
00:03:08.511    TEST_HEADER include/spdk/nbd.h
00:03:08.511    TEST_HEADER include/spdk/net.h
00:03:08.511    TEST_HEADER include/spdk/notify.h
00:03:08.511    TEST_HEADER include/spdk/nvme.h
00:03:08.511    TEST_HEADER include/spdk/nvme_intel.h
00:03:08.511    TEST_HEADER include/spdk/nvme_ocssd.h
00:03:08.511    TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:08.511    TEST_HEADER include/spdk/nvme_spec.h
00:03:08.511    TEST_HEADER include/spdk/nvme_zns.h
00:03:08.511    CC test/env/mem_callbacks/mem_callbacks.o
00:03:08.511    TEST_HEADER include/spdk/nvmf.h
00:03:08.511    LINK test_dma
00:03:08.511    TEST_HEADER include/spdk/nvmf_cmd.h
00:03:08.511    TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:08.511    TEST_HEADER include/spdk/nvmf_spec.h
00:03:08.511    TEST_HEADER include/spdk/nvmf_transport.h
00:03:08.511    TEST_HEADER include/spdk/opal.h
00:03:08.511    TEST_HEADER include/spdk/opal_spec.h
00:03:08.511    TEST_HEADER include/spdk/pci_ids.h
00:03:08.511    TEST_HEADER include/spdk/pipe.h
00:03:08.511    TEST_HEADER include/spdk/queue.h
00:03:08.511    TEST_HEADER include/spdk/reduce.h
00:03:08.511    TEST_HEADER include/spdk/rpc.h
00:03:08.511    TEST_HEADER include/spdk/scheduler.h
00:03:08.511    TEST_HEADER include/spdk/scsi.h
00:03:08.511    TEST_HEADER include/spdk/scsi_spec.h
00:03:08.511    TEST_HEADER include/spdk/sock.h
00:03:08.511    TEST_HEADER include/spdk/stdinc.h
00:03:08.511    TEST_HEADER include/spdk/string.h
00:03:08.511    TEST_HEADER include/spdk/thread.h
00:03:08.511    TEST_HEADER include/spdk/trace.h
00:03:08.511    TEST_HEADER include/spdk/trace_parser.h
00:03:08.511    TEST_HEADER include/spdk/tree.h
00:03:08.511    CC app/spdk_nvme_discover/discovery_aer.o
00:03:08.511    TEST_HEADER include/spdk/ublk.h
00:03:08.511    TEST_HEADER include/spdk/util.h
00:03:08.511    LINK verify
00:03:08.511    TEST_HEADER include/spdk/uuid.h
00:03:08.511    TEST_HEADER include/spdk/version.h
00:03:08.511    TEST_HEADER include/spdk/vfio_user_pci.h
00:03:08.511    TEST_HEADER include/spdk/vfio_user_spec.h
00:03:08.511    TEST_HEADER include/spdk/vhost.h
00:03:08.511    TEST_HEADER include/spdk/vmd.h
00:03:08.511    TEST_HEADER include/spdk/xor.h
00:03:08.511    TEST_HEADER include/spdk/zipf.h
00:03:08.511    CXX test/cpp_headers/accel.o
00:03:08.511    LINK spdk_lock
00:03:08.511    CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:08.511    CC examples/vmd/led/led.o
00:03:08.511    LINK idxd_perf
00:03:08.511    LINK spdk_nvme_discover
00:03:08.511    CC test/app/histogram_perf/histogram_perf.o
00:03:08.511    CC test/env/vtophys/vtophys.o
00:03:08.511    LINK led
00:03:08.511    CC test/app/jsoncat/jsoncat.o
00:03:08.511    CXX test/cpp_headers/accel_module.o
00:03:08.511    LINK histogram_perf
00:03:08.511    CC test/rpc_client/rpc_client_test.o
00:03:08.511    LINK vtophys
00:03:08.768    LINK jsoncat
00:03:08.768    CC test/app/stub/stub.o
00:03:08.768    CXX test/cpp_headers/assert.o
00:03:08.768    LINK rpc_client_test
00:03:08.768    LINK spdk_trace
00:03:08.768    CC examples/thread/thread/thread_ex.o
00:03:08.768    LINK stub
00:03:08.768    CXX test/cpp_headers/barrier.o
00:03:08.768    CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:08.768    CC examples/sock/hello_world/hello_sock.o
00:03:08.768    CC app/spdk_top/spdk_top.o
00:03:08.768    CC test/unit/include/spdk/histogram_data.h/histogram_ut.o
00:03:08.768    CXX test/cpp_headers/base64.o
00:03:08.768    CC test/env/memory/memory_ut.o
00:03:08.768    LINK env_dpdk_post_init
00:03:08.768    LINK mem_callbacks
00:03:08.768    LINK hello_sock
00:03:08.768    LINK thread
00:03:08.768    LINK histogram_ut
00:03:08.768    LINK iscsi_fuzz
00:03:09.027    CC app/fio/nvme/fio_plugin.o
00:03:09.027    CC test/env/pci/pci_ut.o
00:03:09.027    CXX test/cpp_headers/bdev.o
00:03:09.027    CC test/unit/lib/log/log.c/log_ut.o
00:03:09.027    CC examples/nvme/hello_world/hello_world.o
00:03:09.027    CC test/accel/dif/dif.o
00:03:09.027    CXX test/cpp_headers/bdev_module.o
00:03:09.027    CC examples/nvme/reconnect/reconnect.o
00:03:09.027    LINK log_ut
00:03:09.027    LINK hello_world
00:03:09.027    LINK spdk_top
00:03:09.027    LINK pci_ut
00:03:09.027    LINK reconnect
00:03:09.027    CXX test/cpp_headers/bdev_zone.o
00:03:09.027    CC test/unit/lib/util/base64.c/base64_ut.o
00:03:09.027    LINK spdk_nvme
00:03:09.027    LINK dif
00:03:09.027    CC test/unit/lib/rdma/common.c/common_ut.o
00:03:09.027    CC examples/accel/perf/accel_perf.o
00:03:09.285    CC examples/nvme/nvme_manage/nvme_manage.o
00:03:09.285    CC examples/blob/hello_world/hello_blob.o
00:03:09.285    LINK base64_ut
00:03:09.285    CXX test/cpp_headers/bit_array.o
00:03:09.285    CC test/blobfs/mkfs/mkfs.o
00:03:09.285    CC app/fio/bdev/fio_plugin.o
00:03:09.285    LINK memory_ut
00:03:09.285    CC test/unit/lib/util/bit_array.c/bit_array_ut.o
00:03:09.285    CXX test/cpp_headers/bit_pool.o
00:03:09.285    CC examples/blob/cli/blobcli.o
00:03:09.285    LINK accel_perf
00:03:09.285    LINK hello_blob
00:03:09.285    LINK common_ut
00:03:09.285    LINK nvme_manage
00:03:09.285    CXX test/cpp_headers/blob.o
00:03:09.285    LINK mkfs
00:03:09.285    CC examples/nvme/arbitration/arbitration.o
00:03:09.285    LINK spdk_bdev
00:03:09.285    LINK bit_array_ut
00:03:09.285    CC test/unit/lib/dma/dma.c/dma_ut.o
00:03:09.285    LINK blobcli
00:03:09.542    CC test/unit/lib/ioat/ioat.c/ioat_ut.o
00:03:09.542    CC test/event/event_perf/event_perf.o
00:03:09.542    CXX test/cpp_headers/blob_bdev.o
00:03:09.542    CC examples/bdev/hello_world/hello_bdev.o
00:03:09.542    LINK arbitration
00:03:09.542    CC test/unit/lib/util/cpuset.c/cpuset_ut.o
00:03:09.542    CC examples/bdev/bdevperf/bdevperf.o
00:03:09.542    LINK event_perf
00:03:09.542    CC test/unit/lib/util/crc16.c/crc16_ut.o
00:03:09.542    CXX test/cpp_headers/blobfs.o
00:03:09.542    LINK hello_bdev
00:03:09.542    LINK crc16_ut
00:03:09.542    LINK ioat_ut
00:03:09.542    CC examples/nvme/hotplug/hotplug.o
00:03:09.542    CC test/event/reactor/reactor.o
00:03:09.542    LINK dma_ut
00:03:09.542    CXX test/cpp_headers/blobfs_bdev.o
00:03:09.542    CC examples/nvme/cmb_copy/cmb_copy.o
00:03:09.542    LINK cpuset_ut
00:03:09.542    CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o
00:03:09.542  gmake[2]: Nothing to be done for 'all'.
00:03:09.542    LINK reactor
00:03:09.542    CXX test/cpp_headers/conf.o
00:03:09.542    CC test/event/reactor_perf/reactor_perf.o
00:03:09.542    LINK hotplug
00:03:09.542    LINK cmb_copy
00:03:09.798    LINK bdevperf
00:03:09.798    CC test/unit/lib/util/crc32c.c/crc32c_ut.o
00:03:09.798    CC test/nvme/aer/aer.o
00:03:09.798    CXX test/cpp_headers/config.o
00:03:09.798    LINK reactor_perf
00:03:09.798    LINK crc32_ieee_ut
00:03:09.798    CXX test/cpp_headers/cpuset.o
00:03:09.798    CXX test/cpp_headers/crc16.o
00:03:09.798    CC examples/nvme/abort/abort.o
00:03:09.798    LINK crc32c_ut
00:03:09.798    LINK aer
00:03:09.798    CC test/unit/lib/util/crc64.c/crc64_ut.o
00:03:09.798    CXX test/cpp_headers/crc32.o
00:03:09.798    CC test/nvme/reset/reset.o
00:03:09.798    CC test/bdev/bdevio/bdevio.o
00:03:09.798    CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:09.798    CC test/nvme/sgl/sgl.o
00:03:09.798    LINK crc64_ut
00:03:09.798    LINK abort
00:03:09.798    CC test/unit/lib/util/dif.c/dif_ut.o
00:03:09.798    LINK pmr_persistence
00:03:09.798    CXX test/cpp_headers/crc64.o
00:03:09.798    LINK sgl
00:03:09.798    CC test/nvme/e2edp/nvme_dp.o
00:03:09.798    LINK reset
00:03:09.798    CXX test/cpp_headers/dif.o
00:03:09.798    CC test/nvme/overhead/overhead.o
00:03:09.798    CC test/unit/lib/util/file.c/file_ut.o
00:03:10.055    LINK bdevio
00:03:10.056    CC test/unit/lib/util/iov.c/iov_ut.o
00:03:10.056    CXX test/cpp_headers/dma.o
00:03:10.056    LINK nvme_dp
00:03:10.056    LINK file_ut
00:03:10.056    CXX test/cpp_headers/endian.o
00:03:10.056    LINK overhead
00:03:10.056    CC test/unit/lib/util/math.c/math_ut.o
00:03:10.056    CC examples/nvmf/nvmf/nvmf.o
00:03:10.056    CC test/nvme/err_injection/err_injection.o
00:03:10.056    LINK iov_ut
00:03:10.056    CC test/nvme/startup/startup.o
00:03:10.056    CC test/unit/lib/util/net.c/net_ut.o
00:03:10.056    LINK math_ut
00:03:10.056    CC test/nvme/reserve/reserve.o
00:03:10.056    CXX test/cpp_headers/env.o
00:03:10.056    CC test/nvme/simple_copy/simple_copy.o
00:03:10.056    LINK err_injection
00:03:10.056    LINK net_ut
00:03:10.056    CC test/nvme/connect_stress/connect_stress.o
00:03:10.056    CC test/nvme/boot_partition/boot_partition.o
00:03:10.056    LINK startup
00:03:10.056    LINK nvmf
00:03:10.056    CXX test/cpp_headers/env_dpdk.o
00:03:10.056    LINK reserve
00:03:10.056    LINK simple_copy
00:03:10.313    LINK dif_ut
00:03:10.313    LINK boot_partition
00:03:10.313    CC test/unit/lib/util/pipe.c/pipe_ut.o
00:03:10.313    CC test/nvme/compliance/nvme_compliance.o
00:03:10.313    CC test/unit/lib/util/string.c/string_ut.o
00:03:10.313    CXX test/cpp_headers/event.o
00:03:10.313    LINK connect_stress
00:03:10.313    CC test/nvme/fused_ordering/fused_ordering.o
00:03:10.313    CXX test/cpp_headers/fd.o
00:03:10.313    CC test/unit/lib/util/xor.c/xor_ut.o
00:03:10.313    CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:10.313    CC test/nvme/fdp/fdp.o
00:03:10.313    CXX test/cpp_headers/fd_group.o
00:03:10.313    LINK string_ut
00:03:10.313    LINK fused_ordering
00:03:10.313    CXX test/cpp_headers/file.o
00:03:10.313    CXX test/cpp_headers/fsdev.o
00:03:10.313    LINK doorbell_aers
00:03:10.313    LINK pipe_ut
00:03:10.313    CXX test/cpp_headers/fsdev_module.o
00:03:10.313    LINK nvme_compliance
00:03:10.313    CXX test/cpp_headers/ftl.o
00:03:10.313    CXX test/cpp_headers/fuse_dispatcher.o
00:03:10.313    LINK fdp
00:03:10.313    CXX test/cpp_headers/gpt_spec.o
00:03:10.313    CXX test/cpp_headers/hexlify.o
00:03:10.313    LINK xor_ut
00:03:10.313    CXX test/cpp_headers/histogram_data.o
00:03:10.313    CXX test/cpp_headers/idxd.o
00:03:10.313    CXX test/cpp_headers/idxd_spec.o
00:03:10.313    CXX test/cpp_headers/init.o
00:03:10.572    CXX test/cpp_headers/ioat.o
00:03:10.572    CC test/unit/lib/json/json_parse.c/json_parse_ut.o
00:03:10.572    CC test/unit/lib/json/json_util.c/json_util_ut.o
00:03:10.572    CC test/unit/lib/json/json_write.c/json_write_ut.o
00:03:10.572    CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o
00:03:10.572    CXX test/cpp_headers/ioat_spec.o
00:03:10.572    CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o
00:03:10.572    CXX test/cpp_headers/iscsi_spec.o
00:03:10.572    CC test/unit/lib/idxd/idxd.c/idxd_ut.o
00:03:10.572    CXX test/cpp_headers/json.o
00:03:10.572    CXX test/cpp_headers/jsonrpc.o
00:03:10.572    LINK pci_event_ut
00:03:10.572    CXX test/cpp_headers/keyring.o
00:03:10.572    CXX test/cpp_headers/keyring_module.o
00:03:10.572    CXX test/cpp_headers/likely.o
00:03:10.572    CXX test/cpp_headers/log.o
00:03:10.572    LINK idxd_user_ut
00:03:10.572    CXX test/cpp_headers/lvol.o
00:03:10.572    CXX test/cpp_headers/md5.o
00:03:10.572    LINK json_util_ut
00:03:10.828    CXX test/cpp_headers/memory.o
00:03:10.828    LINK idxd_ut
00:03:10.828    CXX test/cpp_headers/mmio.o
00:03:10.828    CXX test/cpp_headers/nbd.o
00:03:10.828    CXX test/cpp_headers/net.o
00:03:10.828    LINK json_write_ut
00:03:10.828    CXX test/cpp_headers/notify.o
00:03:10.828    CXX test/cpp_headers/nvme.o
00:03:10.828    CXX test/cpp_headers/nvme_intel.o
00:03:10.828    CXX test/cpp_headers/nvme_ocssd.o
00:03:10.828    CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:10.828    CXX test/cpp_headers/nvme_spec.o
00:03:10.828    LINK json_parse_ut
00:03:10.828    CXX test/cpp_headers/nvme_zns.o
00:03:10.828    CXX test/cpp_headers/nvmf.o
00:03:10.828    CXX test/cpp_headers/nvmf_cmd.o
00:03:10.828    CXX test/cpp_headers/nvmf_fc_spec.o
00:03:10.828    CXX test/cpp_headers/nvmf_spec.o
00:03:10.828    CXX test/cpp_headers/nvmf_transport.o
00:03:10.828    CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o
00:03:10.828    CXX test/cpp_headers/opal.o
00:03:10.828    CXX test/cpp_headers/opal_spec.o
00:03:10.828    CXX test/cpp_headers/pci_ids.o
00:03:10.828    CXX test/cpp_headers/pipe.o
00:03:10.829    CXX test/cpp_headers/queue.o
00:03:10.829    CXX test/cpp_headers/reduce.o
00:03:10.829    CXX test/cpp_headers/rpc.o
00:03:10.829    CXX test/cpp_headers/scheduler.o
00:03:11.085    LINK jsonrpc_server_ut
00:03:11.085    CXX test/cpp_headers/scsi.o
00:03:11.085    CXX test/cpp_headers/scsi_spec.o
00:03:11.085    CXX test/cpp_headers/sock.o
00:03:11.085    CXX test/cpp_headers/stdinc.o
00:03:11.085    CXX test/cpp_headers/string.o
00:03:11.085    CXX test/cpp_headers/thread.o
00:03:11.085    CXX test/cpp_headers/trace.o
00:03:11.085    CXX test/cpp_headers/trace_parser.o
00:03:11.085    CXX test/cpp_headers/tree.o
00:03:11.085    CC test/unit/lib/rpc/rpc.c/rpc_ut.o
00:03:11.085    CXX test/cpp_headers/ublk.o
00:03:11.085    CXX test/cpp_headers/util.o
00:03:11.085    CXX test/cpp_headers/uuid.o
00:03:11.085    CXX test/cpp_headers/version.o
00:03:11.085    CXX test/cpp_headers/vfio_user_pci.o
00:03:11.085    CXX test/cpp_headers/vfio_user_spec.o
00:03:11.085    CXX test/cpp_headers/vhost.o
00:03:11.085    CXX test/cpp_headers/vmd.o
00:03:11.085    CXX test/cpp_headers/xor.o
00:03:11.085    CXX test/cpp_headers/zipf.o
00:03:11.342    LINK rpc_ut
00:03:11.342    CC test/unit/lib/thread/thread.c/thread_ut.o
00:03:11.342    CC test/unit/lib/thread/iobuf.c/iobuf_ut.o
00:03:11.342    CC test/unit/lib/sock/posix.c/posix_ut.o
00:03:11.342    CC test/unit/lib/sock/sock.c/sock_ut.o
00:03:11.342    CC test/unit/lib/notify/notify.c/notify_ut.o
00:03:11.342    CC test/unit/lib/keyring/keyring.c/keyring_ut.o
00:03:11.600    LINK keyring_ut
00:03:11.600    LINK notify_ut
00:03:11.600    LINK iobuf_ut
00:03:11.600    LINK posix_ut
00:03:11.857    LINK thread_ut
00:03:11.857    LINK sock_ut
00:03:11.857    CC test/unit/lib/init/subsystem.c/subsystem_ut.o
00:03:11.857    CC test/unit/lib/init/rpc.c/rpc_ut.o
00:03:11.857    CC test/unit/lib/accel/accel.c/accel_ut.o
00:03:11.857    CC test/unit/lib/blob/blob.c/blob_ut.o
00:03:11.857    CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o
00:03:11.857    CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o
00:03:11.857    CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o
00:03:11.857    CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o
00:03:11.857    CC test/unit/lib/nvme/nvme.c/nvme_ut.o
00:03:11.857    CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o
00:03:11.857    LINK rpc_ut
00:03:12.115    CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o
00:03:12.115    LINK subsystem_ut
00:03:12.115    LINK blob_bdev_ut
00:03:12.115    CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o
00:03:12.115    CC test/unit/lib/event/app.c/app_ut.o
00:03:12.374    LINK nvme_ctrlr_ocssd_cmd_ut
00:03:12.374    LINK accel_ut
00:03:12.374    CC test/unit/lib/event/reactor.c/reactor_ut.o
00:03:12.374    LINK app_ut
00:03:12.374    CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o
00:03:12.374    LINK nvme_ns_ut
00:03:12.374    CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o
00:03:12.374    LINK nvme_ctrlr_cmd_ut
00:03:12.374    CC test/unit/lib/bdev/part.c/part_ut.o
00:03:12.374    CC test/unit/lib/bdev/bdev.c/bdev_ut.o
00:03:12.632    LINK nvme_ut
00:03:12.632    LINK nvme_ns_cmd_ut
00:03:12.632    CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o
00:03:12.632    LINK nvme_ns_ocssd_cmd_ut
00:03:12.632    LINK reactor_ut
00:03:12.632    CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o
00:03:12.632    LINK scsi_nvme_ut
00:03:12.632    CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o
00:03:12.632    LINK nvme_ctrlr_ut
00:03:12.889    CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o
00:03:12.890    CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o
00:03:12.890    CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o
00:03:12.890    LINK nvme_poll_group_ut
00:03:12.890    CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o
00:03:12.890    LINK gpt_ut
00:03:12.890    CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o
00:03:12.890    LINK nvme_quirks_ut
00:03:13.148    LINK nvme_pcie_ut
00:03:13.148    LINK blob_ut
00:03:13.148    CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o
00:03:13.148    LINK nvme_qpair_ut
00:03:13.148    CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o
00:03:13.148    CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o
00:03:13.148    LINK vbdev_lvol_ut
00:03:13.148    CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o
00:03:13.148    LINK part_ut
00:03:13.148    LINK bdev_zone_ut
00:03:13.148    CC test/unit/lib/blobfs/tree.c/tree_ut.o
00:03:13.411    CC test/unit/lib/bdev/raid/concat.c/concat_ut.o
00:03:13.411    CC test/unit/lib/lvol/lvol.c/lvol_ut.o
00:03:13.411    LINK tree_ut
00:03:13.411    LINK bdev_raid_sb_ut
00:03:13.411    CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o
00:03:13.411    CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o
00:03:13.411    LINK nvme_tcp_ut
00:03:13.411    LINK nvme_transport_ut
00:03:13.411    LINK concat_ut
00:03:13.411    CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o
00:03:13.411    LINK bdev_ut
00:03:13.411    CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o
00:03:13.411    CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o
00:03:13.670    LINK bdev_ut
00:03:13.670    LINK bdev_raid_ut
00:03:13.670    CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o
00:03:13.670    CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o
00:03:13.670    CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o
00:03:13.670    LINK blobfs_bdev_ut
00:03:13.670    LINK nvme_io_msg_ut
00:03:13.670    LINK vbdev_zone_block_ut
00:03:13.670    LINK blobfs_async_ut
00:03:13.670    CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o
00:03:13.670    CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o
00:03:13.670    CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o
00:03:13.670    LINK lvol_ut
00:03:13.670    LINK blobfs_sync_ut
00:03:13.670    LINK raid1_ut
00:03:13.928    LINK raid0_ut
00:03:13.928    LINK nvme_opal_ut
00:03:13.928    LINK nvme_fabric_ut
00:03:14.186    LINK nvme_pcie_common_ut
00:03:14.444    LINK nvme_rdma_ut
00:03:14.701    LINK bdev_nvme_ut
00:03:14.701    CC test/unit/lib/nvmf/rdma.c/rdma_ut.o
00:03:14.701    CC test/unit/lib/nvmf/transport.c/transport_ut.o
00:03:14.701    CC test/unit/lib/nvmf/tcp.c/tcp_ut.o
00:03:14.701    CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o
00:03:14.701    CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o
00:03:14.701    CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o
00:03:14.701    CC test/unit/lib/scsi/dev.c/dev_ut.o
00:03:14.701    CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o
00:03:14.701    CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o
00:03:14.701    CC test/unit/lib/nvmf/auth.c/auth_ut.o
00:03:14.959    LINK dev_ut
00:03:14.959    CC test/unit/lib/scsi/lun.c/lun_ut.o
00:03:14.959    LINK ctrlr_bdev_ut
00:03:14.959    LINK nvmf_ut
00:03:14.959    CC test/unit/lib/scsi/scsi.c/scsi_ut.o
00:03:14.959    CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o
00:03:15.217    LINK lun_ut
00:03:15.782    LINK rdma_ut
00:03:15.782    LINK ctrlr_ut
00:03:15.782    LINK ctrlr_discovery_ut
00:03:15.782    LINK tcp_ut
00:03:15.782    LINK transport_ut
00:03:15.782    LINK subsystem_ut
00:03:15.782    LINK auth_ut
00:03:15.782    LINK scsi_ut
00:03:16.040    CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o
00:03:16.040    LINK scsi_bdev_ut
00:03:16.040    LINK scsi_pr_ut
00:03:16.298    CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o
00:03:16.298    CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o
00:03:16.298    CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o
00:03:16.298    CC test/unit/lib/iscsi/conn.c/conn_ut.o
00:03:16.298    CC test/unit/lib/iscsi/param.c/param_ut.o
00:03:16.298    CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o
00:03:16.298    LINK init_grp_ut
00:03:16.298    LINK param_ut
00:03:16.558    LINK tgt_node_ut
00:03:16.558    LINK portal_grp_ut
00:03:16.558    LINK iscsi_ut
00:03:16.819    LINK conn_ut
00:03:16.819  
00:03:16.819  real	0m50.390s
00:03:16.819  user	3m30.767s
00:03:16.819  sys	0m31.579s
00:03:16.819   10:18:08 unittest_build -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:03:16.819   10:18:08 unittest_build -- common/autotest_common.sh@10 -- $ set +x
00:03:16.819  ************************************
00:03:16.819  END TEST unittest_build
00:03:16.819  ************************************
00:03:16.819   10:18:08  -- spdk/autobuild.sh@1 -- $ stop_monitor_resources
00:03:16.819   10:18:08  -- pm/common@29 -- $ signal_monitor_resources TERM
00:03:16.819   10:18:08  -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:03:16.820   10:18:08  -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:03:16.820   10:18:08  -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:03:16.820   10:18:08  -- pm/common@44 -- $ pid=1360
00:03:16.820   10:18:08  -- pm/common@50 -- $ kill -TERM 1360
00:03:16.820   10:18:08  -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 ))
00:03:16.820   10:18:08  -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:03:16.820    10:18:08  -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:16.820     10:18:08  -- common/autotest_common.sh@1711 -- # lcov --version
00:03:16.820     10:18:08  -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:16.820    10:18:08  -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:16.820    10:18:08  -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:16.820    10:18:08  -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:16.820    10:18:08  -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:16.820    10:18:08  -- scripts/common.sh@336 -- # IFS=.-:
00:03:16.820    10:18:08  -- scripts/common.sh@336 -- # read -ra ver1
00:03:16.820    10:18:08  -- scripts/common.sh@337 -- # IFS=.-:
00:03:16.820    10:18:08  -- scripts/common.sh@337 -- # read -ra ver2
00:03:16.820    10:18:08  -- scripts/common.sh@338 -- # local 'op=<'
00:03:16.820    10:18:08  -- scripts/common.sh@340 -- # ver1_l=2
00:03:16.820    10:18:08  -- scripts/common.sh@341 -- # ver2_l=1
00:03:16.820    10:18:08  -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:16.820    10:18:08  -- scripts/common.sh@344 -- # case "$op" in
00:03:16.820    10:18:08  -- scripts/common.sh@345 -- # : 1
00:03:16.820    10:18:08  -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:16.820    10:18:08  -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:16.820     10:18:08  -- scripts/common.sh@365 -- # decimal 1
00:03:16.820     10:18:08  -- scripts/common.sh@353 -- # local d=1
00:03:16.820     10:18:08  -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:16.820     10:18:08  -- scripts/common.sh@355 -- # echo 1
00:03:16.820    10:18:08  -- scripts/common.sh@365 -- # ver1[v]=1
00:03:16.820     10:18:08  -- scripts/common.sh@366 -- # decimal 2
00:03:16.820     10:18:08  -- scripts/common.sh@353 -- # local d=2
00:03:16.820     10:18:08  -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:16.820     10:18:08  -- scripts/common.sh@355 -- # echo 2
00:03:16.820    10:18:08  -- scripts/common.sh@366 -- # ver2[v]=2
00:03:16.820    10:18:08  -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:16.820    10:18:08  -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:16.820    10:18:08  -- scripts/common.sh@368 -- # return 0
00:03:16.820    10:18:08  -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:16.820    10:18:08  -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:16.820  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:16.820  		--rc genhtml_branch_coverage=1
00:03:16.820  		--rc genhtml_function_coverage=1
00:03:16.820  		--rc genhtml_legend=1
00:03:16.820  		--rc geninfo_all_blocks=1
00:03:16.820  		--rc geninfo_unexecuted_blocks=1
00:03:16.820  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:16.820  		'
00:03:16.820    10:18:08  -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:16.820  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:16.820  		--rc genhtml_branch_coverage=1
00:03:16.820  		--rc genhtml_function_coverage=1
00:03:16.820  		--rc genhtml_legend=1
00:03:16.820  		--rc geninfo_all_blocks=1
00:03:16.820  		--rc geninfo_unexecuted_blocks=1
00:03:16.820  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:16.820  		'
00:03:16.820    10:18:08  -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:16.820  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:16.820  		--rc genhtml_branch_coverage=1
00:03:16.820  		--rc genhtml_function_coverage=1
00:03:16.820  		--rc genhtml_legend=1
00:03:16.820  		--rc geninfo_all_blocks=1
00:03:16.820  		--rc geninfo_unexecuted_blocks=1
00:03:16.820  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:16.820  		'
00:03:16.820    10:18:08  -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:16.820  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:16.820  		--rc genhtml_branch_coverage=1
00:03:16.820  		--rc genhtml_function_coverage=1
00:03:16.820  		--rc genhtml_legend=1
00:03:16.820  		--rc geninfo_all_blocks=1
00:03:16.820  		--rc geninfo_unexecuted_blocks=1
00:03:16.820  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:16.820  		'
00:03:16.820   10:18:08  -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:03:16.820     10:18:08  -- nvmf/common.sh@7 -- # uname -s
00:03:16.820    10:18:08  -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]]
00:03:16.820    10:18:08  -- nvmf/common.sh@7 -- # return 0
00:03:16.820   10:18:08  -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:03:16.820    10:18:08  -- spdk/autotest.sh@32 -- # uname -s
00:03:16.820   10:18:08  -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']'
00:03:16.820   10:18:08  -- spdk/autotest.sh@53 -- # start_monitor_resources
00:03:16.820   10:18:08  -- pm/common@17 -- # local monitor
00:03:16.820   10:18:08  -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:03:16.820   10:18:08  -- pm/common@25 -- # sleep 1
00:03:16.820    10:18:08  -- pm/common@21 -- # date +%s
00:03:16.820   10:18:08  -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1733739488
00:03:17.079  Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1733739488_collect-vmstat.pm.log
00:03:18.014   10:18:09  -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:03:18.014   10:18:09  -- spdk/autotest.sh@57 -- # timing_enter autotest
00:03:18.014   10:18:09  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:18.014   10:18:09  -- common/autotest_common.sh@10 -- # set +x
00:03:18.014   10:18:09  -- spdk/autotest.sh@59 -- # create_test_list
00:03:18.014   10:18:09  -- common/autotest_common.sh@752 -- # xtrace_disable
00:03:18.014   10:18:09  -- common/autotest_common.sh@10 -- # set +x
00:03:18.014     10:18:10  -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh
00:03:18.014    10:18:10  -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk
00:03:18.014   10:18:10  -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk
00:03:18.014   10:18:10  -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output
00:03:18.014   10:18:10  -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk
00:03:18.014   10:18:10  -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:03:18.014    10:18:10  -- common/autotest_common.sh@1457 -- # uname
00:03:18.014   10:18:10  -- common/autotest_common.sh@1457 -- # '[' FreeBSD = FreeBSD ']'
00:03:18.014   10:18:10  -- common/autotest_common.sh@1458 -- # kldunload contigmem.ko
00:03:18.014  kldunload: can't find file contigmem.ko
00:03:18.014   10:18:10  -- common/autotest_common.sh@1458 -- # true
00:03:18.014   10:18:10  -- common/autotest_common.sh@1459 -- # '[' -n '' ']'
00:03:18.014   10:18:10  -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/
00:03:18.014   10:18:10  -- common/autotest_common.sh@1466 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/
00:03:18.014   10:18:10  -- common/autotest_common.sh@1467 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/
00:03:18.014   10:18:10  -- common/autotest_common.sh@1468 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/
00:03:18.014   10:18:10  -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:03:18.014    10:18:10  -- common/autotest_common.sh@1477 -- # uname
00:03:18.014   10:18:10  -- common/autotest_common.sh@1477 -- # [[ FreeBSD = FreeBSD ]]
00:03:18.014    10:18:10  -- common/autotest_common.sh@1477 -- # sysctl -n kern.ipc.maxsockbuf
00:03:18.014   10:18:10  -- common/autotest_common.sh@1477 -- # (( 2097152 < 4194304 ))
00:03:18.014   10:18:10  -- common/autotest_common.sh@1478 -- # sysctl kern.ipc.maxsockbuf=4194304
00:03:18.014  kern.ipc.maxsockbuf: 2097152 -> 4194304
00:03:18.014   10:18:10  -- spdk/autotest.sh@68 -- # [[ y == y ]]
00:03:18.014   10:18:10  -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh --version
00:03:18.014  lcov: LCOV version 1.15
00:03:18.014   10:18:10  -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info
00:03:21.295  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvmf/mdns_server.gcno
00:03:27.881   10:18:20  -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup
00:03:27.881   10:18:20  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:27.881   10:18:20  -- common/autotest_common.sh@10 -- # set +x
00:03:27.881   10:18:20  -- spdk/autotest.sh@78 -- # rm -f
00:03:27.881   10:18:20  -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:03:28.141  kldunload: can't find file contigmem.ko
00:03:28.141  kldunload: can't find file nic_uio.ko
00:03:28.141   10:18:20  -- spdk/autotest.sh@83 -- # get_zoned_devs
00:03:28.141   10:18:20  -- common/autotest_common.sh@1657 -- # zoned_devs=()
00:03:28.141   10:18:20  -- common/autotest_common.sh@1657 -- # local -gA zoned_devs
00:03:28.141   10:18:20  -- common/autotest_common.sh@1658 -- # zoned_ctrls=()
00:03:28.141   10:18:20  -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls
00:03:28.141   10:18:20  -- common/autotest_common.sh@1659 -- # local nvme bdf ns
00:03:28.141   10:18:20  -- spdk/autotest.sh@85 -- # (( 0 > 0 ))
00:03:28.141   10:18:20  -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*)
00:03:28.141   10:18:20  -- spdk/autotest.sh@99 -- # [[ -z '' ]]
00:03:28.141   10:18:20  -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0ns1
00:03:28.141   10:18:20  -- scripts/common.sh@381 -- # local block=/dev/nvme0ns1 pt
00:03:28.141   10:18:20  -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1
00:03:28.141  nvme0ns1 is not a block device
00:03:28.141    10:18:20  -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0ns1
00:03:28.141  /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 394: blkid: command not found
00:03:28.141   10:18:20  -- scripts/common.sh@394 -- # pt=
00:03:28.141   10:18:20  -- scripts/common.sh@395 -- # return 1
00:03:28.141   10:18:20  -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1
00:03:28.141  1+0 records in
00:03:28.141  1+0 records out
00:03:28.141  1048576 bytes transferred in 0.013850 secs (75709885 bytes/sec)
00:03:28.141   10:18:20  -- spdk/autotest.sh@105 -- # sync
00:03:30.688   10:18:22  -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:30.688   10:18:22  -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:30.688    10:18:22  -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:31.257    10:18:23  -- spdk/autotest.sh@111 -- # uname -s
00:03:31.257   10:18:23  -- spdk/autotest.sh@111 -- # [[ FreeBSD == Linux ]]
00:03:31.257   10:18:23  -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:03:31.257  Contigmem (not present)
00:03:31.257  Buffer Size: not set
00:03:31.257  Num Buffers: not set
00:03:31.257  
00:03:31.257  
00:03:31.258  Type     BDF             Vendor Device Driver          
00:03:31.258  NVMe     0:16:0          0x1b36 0x0010 nvme0           
00:03:31.258    10:18:23  -- spdk/autotest.sh@117 -- # uname -s
00:03:31.258   10:18:23  -- spdk/autotest.sh@117 -- # [[ FreeBSD == Linux ]]
00:03:31.258   10:18:23  -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup
00:03:31.258   10:18:23  -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:31.258   10:18:23  -- common/autotest_common.sh@10 -- # set +x
00:03:31.258   10:18:23  -- spdk/autotest.sh@125 -- # timing_enter afterboot
00:03:31.258   10:18:23  -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:31.258   10:18:23  -- common/autotest_common.sh@10 -- # set +x
00:03:31.258   10:18:23  -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:03:31.516  hw.nic_uio.bdfs="0:16:0"
00:03:31.516  hw.contigmem.num_buffers="8"
00:03:31.516  hw.contigmem.buffer_size="268435456"
00:03:31.775   10:18:23  -- spdk/autotest.sh@127 -- # timing_exit afterboot
00:03:31.775   10:18:23  -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:31.775   10:18:23  -- common/autotest_common.sh@10 -- # set +x
00:03:31.775   10:18:23  -- spdk/autotest.sh@131 -- # opal_revert_cleanup
00:03:31.775   10:18:23  -- common/autotest_common.sh@1578 -- # mapfile -t bdfs
00:03:31.775    10:18:23  -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54
00:03:31.775    10:18:23  -- common/autotest_common.sh@1563 -- # bdfs=()
00:03:31.776    10:18:23  -- common/autotest_common.sh@1563 -- # _bdfs=()
00:03:31.776    10:18:23  -- common/autotest_common.sh@1563 -- # local bdfs _bdfs
00:03:31.776    10:18:23  -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs))
00:03:31.776     10:18:23  -- common/autotest_common.sh@1564 -- # get_nvme_bdfs
00:03:31.776     10:18:23  -- common/autotest_common.sh@1498 -- # bdfs=()
00:03:31.776     10:18:23  -- common/autotest_common.sh@1498 -- # local bdfs
00:03:31.776     10:18:23  -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:03:31.776      10:18:23  -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:03:31.776      10:18:23  -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:03:31.776     10:18:23  -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:03:31.776     10:18:23  -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:03:31.776    10:18:23  -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}"
00:03:31.776     10:18:23  -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device
00:03:31.776  cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory
00:03:31.776    10:18:23  -- common/autotest_common.sh@1566 -- # device=
00:03:31.776    10:18:23  -- common/autotest_common.sh@1566 -- # true
00:03:31.776    10:18:23  -- common/autotest_common.sh@1567 -- # [[ '' == \0\x\0\a\5\4 ]]
00:03:31.776    10:18:23  -- common/autotest_common.sh@1572 -- # (( 0 > 0 ))
00:03:31.776    10:18:23  -- common/autotest_common.sh@1572 -- # return 0
00:03:31.776   10:18:23  -- common/autotest_common.sh@1579 -- # [[ -z '' ]]
00:03:31.776   10:18:23  -- common/autotest_common.sh@1580 -- # return 0
00:03:31.776   10:18:23  -- spdk/autotest.sh@137 -- # '[' 1 -eq 1 ']'
00:03:31.776   10:18:23  -- spdk/autotest.sh@138 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:31.776   10:18:23  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:31.776   10:18:23  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:31.776   10:18:23  -- common/autotest_common.sh@10 -- # set +x
00:03:31.776  ************************************
00:03:31.776  START TEST unittest
00:03:31.776  ************************************
00:03:31.776   10:18:23 unittest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:31.776  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:31.776  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit
00:03:31.776  + testdir=/home/vagrant/spdk_repo/spdk/test/unit
00:03:31.776  +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh
00:03:31.776  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../..
00:03:31.776  + rootdir=/home/vagrant/spdk_repo/spdk
00:03:31.776  + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:03:31.776  ++ rpc_py=rpc_cmd
00:03:31.776  ++ set -e
00:03:31.776  ++ shopt -s nullglob
00:03:31.776  ++ shopt -s extglob
00:03:31.776  ++ shopt -s inherit_errexit
00:03:31.776  ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:03:31.776  ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:03:31.776  ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:03:31.776  +++ CONFIG_WPDK_DIR=
00:03:31.776  +++ CONFIG_ASAN=n
00:03:31.776  +++ CONFIG_VBDEV_COMPRESS=n
00:03:31.776  +++ CONFIG_HAVE_EXECINFO_H=y
00:03:31.776  +++ CONFIG_USDT=n
00:03:31.776  +++ CONFIG_CUSTOMOCF=n
00:03:31.776  +++ CONFIG_PREFIX=/usr/local
00:03:31.776  +++ CONFIG_RBD=n
00:03:31.776  +++ CONFIG_LIBDIR=
00:03:31.776  +++ CONFIG_IDXD=y
00:03:31.776  +++ CONFIG_NVME_CUSE=n
00:03:31.776  +++ CONFIG_SMA=n
00:03:31.776  +++ CONFIG_VTUNE=n
00:03:31.776  +++ CONFIG_TSAN=n
00:03:31.776  +++ CONFIG_RDMA_SEND_WITH_INVAL=y
00:03:31.776  +++ CONFIG_VFIO_USER_DIR=
00:03:31.776  +++ CONFIG_MAX_NUMA_NODES=1
00:03:31.776  +++ CONFIG_PGO_CAPTURE=n
00:03:31.776  +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n
00:03:31.776  +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:31.776  +++ CONFIG_LTO=n
00:03:31.776  +++ CONFIG_ISCSI_INITIATOR=n
00:03:31.776  +++ CONFIG_CET=n
00:03:31.776  +++ CONFIG_VBDEV_COMPRESS_MLX5=n
00:03:31.776  +++ CONFIG_OCF_PATH=
00:03:31.776  +++ CONFIG_RDMA_SET_TOS=y
00:03:31.776  +++ CONFIG_AIO_FSDEV=n
00:03:31.776  +++ CONFIG_HAVE_ARC4RANDOM=y
00:03:31.776  +++ CONFIG_HAVE_LIBARCHIVE=n
00:03:31.776  +++ CONFIG_UBLK=n
00:03:31.776  +++ CONFIG_ISAL_CRYPTO=y
00:03:31.776  +++ CONFIG_OPENSSL_PATH=
00:03:31.776  +++ CONFIG_OCF=n
00:03:31.776  +++ CONFIG_FUSE=n
00:03:31.776  +++ CONFIG_VTUNE_DIR=
00:03:31.776  +++ CONFIG_FUZZER_LIB=
00:03:31.776  +++ CONFIG_FUZZER=n
00:03:31.776  +++ CONFIG_FSDEV=n
00:03:31.776  +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:03:31.776  +++ CONFIG_CRYPTO=n
00:03:31.776  +++ CONFIG_PGO_USE=n
00:03:31.776  +++ CONFIG_VHOST=n
00:03:31.776  +++ CONFIG_DAOS=n
00:03:31.776  +++ CONFIG_DPDK_INC_DIR=
00:03:31.776  +++ CONFIG_DAOS_DIR=
00:03:31.776  +++ CONFIG_UNIT_TESTS=y
00:03:31.776  +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n
00:03:31.776  +++ CONFIG_VIRTIO=n
00:03:31.776  +++ CONFIG_DPDK_UADK=n
00:03:31.776  +++ CONFIG_COVERAGE=y
00:03:31.776  +++ CONFIG_RDMA=y
00:03:31.776  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIM=n
00:03:31.776  +++ CONFIG_HAVE_LZ4=y
00:03:31.776  +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:03:31.776  +++ CONFIG_URING_PATH=
00:03:31.776  +++ CONFIG_XNVME=n
00:03:31.776  +++ CONFIG_VFIO_USER=n
00:03:31.776  +++ CONFIG_ARCH=native
00:03:31.776  +++ CONFIG_HAVE_EVP_MAC=y
00:03:31.776  +++ CONFIG_URING_ZNS=n
00:03:31.776  +++ CONFIG_WERROR=y
00:03:31.776  +++ CONFIG_HAVE_LIBBSD=n
00:03:31.776  +++ CONFIG_UBSAN=n
00:03:31.776  +++ CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:03:31.776  +++ CONFIG_IPSEC_MB_DIR=
00:03:31.776  +++ CONFIG_GOLANG=n
00:03:31.776  +++ CONFIG_ISAL=y
00:03:31.776  +++ CONFIG_IDXD_KERNEL=n
00:03:31.776  +++ CONFIG_DPDK_LIB_DIR=
00:03:31.776  +++ CONFIG_RDMA_PROV=verbs
00:03:31.776  +++ CONFIG_APPS=y
00:03:31.776  +++ CONFIG_SHARED=n
00:03:31.776  +++ CONFIG_HAVE_KEYUTILS=n
00:03:31.776  +++ CONFIG_FC_PATH=
00:03:31.776  +++ CONFIG_DPDK_PKG_CONFIG=n
00:03:31.776  +++ CONFIG_FC=n
00:03:31.776  +++ CONFIG_AVAHI=n
00:03:31.776  +++ CONFIG_FIO_PLUGIN=y
00:03:31.776  +++ CONFIG_RAID5F=n
00:03:31.776  +++ CONFIG_EXAMPLES=y
00:03:31.776  +++ CONFIG_TESTS=y
00:03:31.776  +++ CONFIG_CRYPTO_MLX5=n
00:03:31.776  +++ CONFIG_MAX_LCORES=128
00:03:31.776  +++ CONFIG_IPSEC_MB=n
00:03:31.776  +++ CONFIG_PGO_DIR=
00:03:31.776  +++ CONFIG_DEBUG=y
00:03:31.776  +++ CONFIG_DPDK_COMPRESSDEV=n
00:03:31.776  +++ CONFIG_CROSS_PREFIX=
00:03:31.776  +++ CONFIG_COPY_FILE_RANGE=n
00:03:31.776  +++ CONFIG_URING=n
00:03:31.776  ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:03:31.776  +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:03:31.776  ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:03:31.776  +++ _root=/home/vagrant/spdk_repo/spdk/test/common
00:03:31.776  +++ _root=/home/vagrant/spdk_repo/spdk
00:03:31.776  +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:03:31.776  +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:03:31.776  +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:03:31.776  +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:03:31.776  +++ ISCSI_APP=("$_app_dir/iscsi_tgt")
00:03:31.776  +++ NVMF_APP=("$_app_dir/nvmf_tgt")
00:03:31.776  +++ VHOST_APP=("$_app_dir/vhost")
00:03:31.776  +++ DD_APP=("$_app_dir/spdk_dd")
00:03:31.776  +++ SPDK_APP=("$_app_dir/spdk_tgt")
00:03:31.776  +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:03:31.776  +++ [[ #ifndef SPDK_CONFIG_H
00:03:31.776  #define SPDK_CONFIG_H
00:03:31.776  #undef SPDK_CONFIG_AIO_FSDEV
00:03:31.776  #define SPDK_CONFIG_APPS 1
00:03:31.776  #define SPDK_CONFIG_ARCH native
00:03:31.776  #undef SPDK_CONFIG_ASAN
00:03:31.776  #undef SPDK_CONFIG_AVAHI
00:03:31.776  #undef SPDK_CONFIG_CET
00:03:31.776  #undef SPDK_CONFIG_COPY_FILE_RANGE
00:03:31.776  #define SPDK_CONFIG_COVERAGE 1
00:03:31.776  #define SPDK_CONFIG_CROSS_PREFIX 
00:03:31.776  #undef SPDK_CONFIG_CRYPTO
00:03:31.776  #undef SPDK_CONFIG_CRYPTO_MLX5
00:03:31.776  #undef SPDK_CONFIG_CUSTOMOCF
00:03:31.776  #undef SPDK_CONFIG_DAOS
00:03:31.776  #define SPDK_CONFIG_DAOS_DIR 
00:03:31.776  #define SPDK_CONFIG_DEBUG 1
00:03:31.776  #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:03:31.776  #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:03:31.776  #define SPDK_CONFIG_DPDK_INC_DIR 
00:03:31.776  #define SPDK_CONFIG_DPDK_LIB_DIR 
00:03:31.776  #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:03:31.776  #undef SPDK_CONFIG_DPDK_UADK
00:03:31.776  #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:03:31.776  #define SPDK_CONFIG_EXAMPLES 1
00:03:31.776  #undef SPDK_CONFIG_FC
00:03:31.776  #define SPDK_CONFIG_FC_PATH 
00:03:31.776  #define SPDK_CONFIG_FIO_PLUGIN 1
00:03:31.776  #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:03:31.776  #undef SPDK_CONFIG_FSDEV
00:03:31.776  #undef SPDK_CONFIG_FUSE
00:03:31.776  #undef SPDK_CONFIG_FUZZER
00:03:31.776  #define SPDK_CONFIG_FUZZER_LIB 
00:03:31.776  #undef SPDK_CONFIG_GOLANG
00:03:31.776  #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:03:31.776  #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:03:31.776  #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:03:31.776  #undef SPDK_CONFIG_HAVE_KEYUTILS
00:03:31.776  #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:03:31.776  #undef SPDK_CONFIG_HAVE_LIBBSD
00:03:31.776  #define SPDK_CONFIG_HAVE_LZ4 1
00:03:31.776  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM
00:03:31.776  #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:03:31.776  #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1
00:03:31.776  #define SPDK_CONFIG_IDXD 1
00:03:31.776  #undef SPDK_CONFIG_IDXD_KERNEL
00:03:31.776  #undef SPDK_CONFIG_IPSEC_MB
00:03:31.776  #define SPDK_CONFIG_IPSEC_MB_DIR 
00:03:31.776  #define SPDK_CONFIG_ISAL 1
00:03:31.776  #define SPDK_CONFIG_ISAL_CRYPTO 1
00:03:31.776  #undef SPDK_CONFIG_ISCSI_INITIATOR
00:03:31.776  #define SPDK_CONFIG_LIBDIR 
00:03:31.776  #undef SPDK_CONFIG_LTO
00:03:31.776  #define SPDK_CONFIG_MAX_LCORES 128
00:03:31.776  #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:03:31.776  #undef SPDK_CONFIG_NVME_CUSE
00:03:31.776  #undef SPDK_CONFIG_OCF
00:03:31.776  #define SPDK_CONFIG_OCF_PATH 
00:03:31.776  #define SPDK_CONFIG_OPENSSL_PATH 
00:03:31.776  #undef SPDK_CONFIG_PGO_CAPTURE
00:03:31.776  #define SPDK_CONFIG_PGO_DIR 
00:03:31.776  #undef SPDK_CONFIG_PGO_USE
00:03:31.776  #define SPDK_CONFIG_PREFIX /usr/local
00:03:31.776  #undef SPDK_CONFIG_RAID5F
00:03:31.776  #undef SPDK_CONFIG_RBD
00:03:31.776  #define SPDK_CONFIG_RDMA 1
00:03:31.777  #define SPDK_CONFIG_RDMA_PROV verbs
00:03:31.777  #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:03:31.777  #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT
00:03:31.777  #define SPDK_CONFIG_RDMA_SET_TOS 1
00:03:31.777  #undef SPDK_CONFIG_SHARED
00:03:31.777  #undef SPDK_CONFIG_SMA
00:03:31.777  #define SPDK_CONFIG_TESTS 1
00:03:31.777  #undef SPDK_CONFIG_TSAN
00:03:31.777  #undef SPDK_CONFIG_UBLK
00:03:31.777  #undef SPDK_CONFIG_UBSAN
00:03:31.777  #define SPDK_CONFIG_UNIT_TESTS 1
00:03:31.777  #undef SPDK_CONFIG_URING
00:03:31.777  #define SPDK_CONFIG_URING_PATH 
00:03:31.777  #undef SPDK_CONFIG_URING_ZNS
00:03:31.777  #undef SPDK_CONFIG_USDT
00:03:31.777  #undef SPDK_CONFIG_VBDEV_COMPRESS
00:03:31.777  #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:03:31.777  #undef SPDK_CONFIG_VFIO_USER
00:03:31.777  #define SPDK_CONFIG_VFIO_USER_DIR 
00:03:31.777  #undef SPDK_CONFIG_VHOST
00:03:31.777  #undef SPDK_CONFIG_VIRTIO
00:03:31.777  #undef SPDK_CONFIG_VTUNE
00:03:31.777  #define SPDK_CONFIG_VTUNE_DIR 
00:03:31.777  #define SPDK_CONFIG_WERROR 1
00:03:31.777  #define SPDK_CONFIG_WPDK_DIR 
00:03:31.777  #undef SPDK_CONFIG_XNVME
00:03:31.777  #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:03:31.777  +++ (( SPDK_AUTOTEST_DEBUG_APPS ))
00:03:31.777  ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:03:31.777  +++ shopt -s extglob
00:03:31.777  +++ [[ -e /bin/wpdk_common.sh ]]
00:03:31.777  +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:31.777  +++ source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:31.777  ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:03:31.777  ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:03:31.777  ++++ export PATH
00:03:31.777  ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:03:31.777  ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:03:31.777  +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:03:31.777  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:03:31.777  +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:03:31.777  ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:03:31.777  +++ _pmrootdir=/home/vagrant/spdk_repo/spdk
00:03:31.777  +++ TEST_TAG=N/A
00:03:31.777  +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:03:31.777  +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:03:31.777  ++++ uname -s
00:03:31.777  +++ PM_OS=FreeBSD
00:03:31.777  +++ MONITOR_RESOURCES_SUDO=()
00:03:31.777  +++ declare -A MONITOR_RESOURCES_SUDO
00:03:31.777  +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:03:31.777  +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:03:31.777  +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:03:31.777  +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:03:31.777  +++ SUDO[0]=
00:03:31.777  +++ SUDO[1]='sudo -E'
00:03:31.777  +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:03:31.777  +++ [[ FreeBSD == FreeBSD ]]
00:03:31.777  +++ MONITOR_RESOURCES=(collect-vmstat)
00:03:31.777  +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:03:31.777  ++ : 0
00:03:31.777  ++ export RUN_NIGHTLY
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_AUTOTEST_DEBUG_APPS
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_RUN_VALGRIND
00:03:31.777  ++ : 1
00:03:31.777  ++ export SPDK_RUN_FUNCTIONAL_TEST
00:03:31.777  ++ : 1
00:03:31.777  ++ export SPDK_TEST_UNITTEST
00:03:31.777  ++ :
00:03:31.777  ++ export SPDK_TEST_AUTOBUILD
00:03:31.777  ++ : 1
00:03:31.777  ++ export SPDK_TEST_RELEASE_BUILD
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_ISAL
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_ISCSI
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_ISCSI_INITIATOR
00:03:31.777  ++ : 1
00:03:31.777  ++ export SPDK_TEST_NVME
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVME_PMR
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVME_BP
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVME_CLI
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVME_CUSE
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVME_FDP
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVMF
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_VFIOUSER
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_VFIOUSER_QEMU
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_FUZZER
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_FUZZER_SHORT
00:03:31.777  ++ : rdma
00:03:31.777  ++ export SPDK_TEST_NVMF_TRANSPORT
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_RBD
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_VHOST
00:03:31.777  ++ : 1
00:03:31.777  ++ export SPDK_TEST_BLOCKDEV
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_RAID
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_IOAT
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_BLOBFS
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_VHOST_INIT
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_LVOL
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_VBDEV_COMPRESS
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_RUN_ASAN
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_RUN_UBSAN
00:03:31.777  ++ :
00:03:31.777  ++ export SPDK_RUN_EXTERNAL_DPDK
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_RUN_NON_ROOT
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_CRYPTO
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_FTL
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_OCF
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_VMD
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_OPAL
00:03:31.777  ++ :
00:03:31.777  ++ export SPDK_TEST_NATIVE_DPDK
00:03:31.777  ++ : true
00:03:31.777  ++ export SPDK_AUTOTEST_X
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_URING
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_USDT
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_USE_IGB_UIO
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_SCHEDULER
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_SCANBUILD
00:03:31.777  ++ :
00:03:31.777  ++ export SPDK_TEST_NVMF_NICS
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_SMA
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_DAOS
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_XNVME
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_ACCEL
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_ACCEL_DSA
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_ACCEL_IAA
00:03:31.777  ++ :
00:03:31.777  ++ export SPDK_TEST_FUZZER_TARGET
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVMF_MDNS
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_JSONRPC_GO_CLIENT
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_SETUP
00:03:31.777  ++ : 0
00:03:31.777  ++ export SPDK_TEST_NVME_INTERRUPT
00:03:31.777  ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:03:31.777  ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:03:31.777  ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:03:31.777  ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:03:31.777  ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:03:31.777  ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:03:31.777  ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:03:31.777  ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:03:31.777  ++ export PCI_BLOCK_SYNC_ON_RESET=yes
00:03:31.777  ++ PCI_BLOCK_SYNC_ON_RESET=yes
00:03:31.777  ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:03:31.777  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:03:31.777  ++ export PYTHONDONTWRITEBYTECODE=1
00:03:31.777  ++ PYTHONDONTWRITEBYTECODE=1
00:03:31.777  ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:03:31.777  ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:03:31.777  ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:03:31.777  ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:03:31.777  ++ asan_suppression_file=/var/tmp/asan_suppression_file
00:03:31.777  ++ rm -rf /var/tmp/asan_suppression_file
00:03:31.777  ++ cat
00:03:31.777  ++ echo leak:libfuse3.so
00:03:31.777  ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:03:31.777  ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:03:31.777  ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:03:31.777  ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:03:31.777  ++ '[' -z /var/spdk/dependencies ']'
00:03:31.777  ++ export DEPENDENCY_DIR
00:03:31.777  ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:03:31.777  ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:03:31.777  ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:03:31.777  ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:03:31.777  ++ export QEMU_BIN=
00:03:31.777  ++ QEMU_BIN=
00:03:31.777  ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:03:31.777  ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:03:31.777  ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:03:31.777  ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:03:31.777  ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:31.777  ++ UNBIND_ENTIRE_IOMMU_GROUP=yes
00:03:31.778  ++ _LCOV_MAIN=0
00:03:31.778  ++ _LCOV_LLVM=1
00:03:31.778  ++ _LCOV=
00:03:31.778  ++ [[ /usr/bin/clang == *clang* ]]
00:03:31.778  ++ _LCOV=1
00:03:31.778  ++ _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:03:31.778  ++ _lcov_opt[_LCOV_MAIN]=
00:03:31.778  ++ lcov_opt='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:03:31.778  ++ '[' 0 -eq 0 ']'
00:03:31.778  ++ export valgrind=
00:03:31.778  ++ valgrind=
00:03:31.778  +++ uname -s
00:03:31.778  ++ '[' FreeBSD = Linux ']'
00:03:31.778  +++ uname -s
00:03:31.778  ++ '[' FreeBSD = FreeBSD ']'
00:03:31.778  ++ MAKE=gmake
00:03:31.778  +++ sysctl -a
00:03:31.778  +++ awk '{print $2}'
00:03:31.778  +++ grep -E -i hw.ncpu
00:03:31.778  ++ MAKEFLAGS=-j10
00:03:31.778  ++ HUGEMEM=2048
00:03:31.778  ++ export HUGEMEM=2048
00:03:31.778  ++ HUGEMEM=2048
00:03:31.778  ++ NO_HUGE=()
00:03:31.778  ++ TEST_MODE=
00:03:31.778  ++ [[ -z '' ]]
00:03:31.778  ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:03:31.778  ++ exec
00:03:31.778  ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins
00:03:31.778  ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server
00:03:31.778  ++ set_test_storage 2147483648
00:03:31.778  ++ [[ -v testdir ]]
00:03:31.778  ++ local requested_size=2147483648
00:03:31.778  ++ local mount target_dir
00:03:31.778  ++ local -A mounts fss sizes avails uses
00:03:31.778  ++ local source fs size avail mount use
00:03:31.778  ++ local storage_fallback storage_candidates
00:03:31.778  +++ mktemp -udt spdk.XXXXXX
00:03:31.778  ++ storage_fallback=/tmp/spdk.XXXXXX.bL9iFWKS6C
00:03:31.778  ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:03:31.778  ++ [[ -n '' ]]
00:03:31.778  ++ [[ -n '' ]]
00:03:31.778  ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.bL9iFWKS6C/tests/unit /tmp/spdk.XXXXXX.bL9iFWKS6C
00:03:31.778  ++ requested_size=2214592512
00:03:31.778  ++ read -r source fs size use avail _ mount
00:03:31.778  +++ df -T
00:03:31.778  +++ grep -v Filesystem
00:03:32.036  ++ mounts["$mount"]=/dev/gptid/183dd0be-692b-11ef-854b-001e67babf3d
00:03:32.036  ++ fss["$mount"]=ufs
00:03:32.036  ++ avails["$mount"]=16875855872
00:03:32.036  ++ sizes["$mount"]=31182712832
00:03:32.036  ++ uses["$mount"]=11812241408
00:03:32.036  ++ read -r source fs size use avail _ mount
00:03:32.036  ++ mounts["$mount"]=devfs
00:03:32.036  ++ fss["$mount"]=devfs
00:03:32.036  ++ avails["$mount"]=1024
00:03:32.036  ++ sizes["$mount"]=1024
00:03:32.036  ++ uses["$mount"]=0
00:03:32.036  ++ read -r source fs size use avail _ mount
00:03:32.036  ++ mounts["$mount"]=tmpfs
00:03:32.036  ++ fss["$mount"]=tmpfs
00:03:32.036  ++ avails["$mount"]=2147438592
00:03:32.036  ++ sizes["$mount"]=2147483648
00:03:32.036  ++ uses["$mount"]=45056
00:03:32.036  ++ read -r source fs size use avail _ mount
00:03:32.036  ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output
00:03:32.036  ++ fss["$mount"]=fusefs.sshfs
00:03:32.036  ++ avails["$mount"]=95137533952
00:03:32.036  ++ sizes["$mount"]=105088212992
00:03:32.036  ++ uses["$mount"]=4565245952
00:03:32.036  ++ read -r source fs size use avail _ mount
00:03:32.036  ++ printf '* Looking for test storage...\n'
00:03:32.036  * Looking for test storage...
00:03:32.036  ++ local target_space new_size
00:03:32.036  ++ for target_dir in "${storage_candidates[@]}"
00:03:32.036  +++ df /home/vagrant/spdk_repo/spdk/test/unit
00:03:32.036  +++ awk '$1 !~ /Filesystem/{print $6}'
00:03:32.036  ++ mount=/
00:03:32.036  ++ target_space=16875855872
00:03:32.036  ++ (( target_space == 0 || target_space < requested_size ))
00:03:32.036  ++ (( target_space >= requested_size ))
00:03:32.036  ++ [[ ufs == tmpfs ]]
00:03:32.036  ++ [[ ufs == ramfs ]]
00:03:32.036  ++ [[ / == / ]]
00:03:32.036  ++ new_size=14026833920
00:03:32.036  ++ (( new_size * 100 / sizes[/] > 95 ))
00:03:32.036  ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:03:32.036  ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit
00:03:32.036  ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit
00:03:32.036  * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit
00:03:32.036  ++ return 0
00:03:32.036  ++ set -o errtrace
00:03:32.036  ++ shopt -s extdebug
00:03:32.036  ++ trap 'trap - ERR; print_backtrace >&2' ERR
00:03:32.036  ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:03:32.036    10:18:23 unittest -- common/autotest_common.sh@1703 -- # true
00:03:32.036    10:18:23 unittest -- common/autotest_common.sh@1705 -- # xtrace_fd
00:03:32.036    10:18:23 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]]
00:03:32.036    10:18:23 unittest -- common/autotest_common.sh@29 -- # exec
00:03:32.037    10:18:23 unittest -- common/autotest_common.sh@31 -- # xtrace_restore
00:03:32.037    10:18:23 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:03:32.037    10:18:23 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:03:32.037    10:18:23 unittest -- common/autotest_common.sh@18 -- # set -x
00:03:32.037    10:18:23 unittest -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:03:32.037     10:18:23 unittest -- common/autotest_common.sh@1711 -- # lcov --version
00:03:32.037     10:18:23 unittest -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:03:32.037    10:18:23 unittest -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:03:32.037    10:18:24 unittest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:32.037    10:18:24 unittest -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:32.037    10:18:24 unittest -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:32.037    10:18:24 unittest -- scripts/common.sh@336 -- # IFS=.-:
00:03:32.037    10:18:24 unittest -- scripts/common.sh@336 -- # read -ra ver1
00:03:32.037    10:18:24 unittest -- scripts/common.sh@337 -- # IFS=.-:
00:03:32.037    10:18:24 unittest -- scripts/common.sh@337 -- # read -ra ver2
00:03:32.037    10:18:24 unittest -- scripts/common.sh@338 -- # local 'op=<'
00:03:32.037    10:18:24 unittest -- scripts/common.sh@340 -- # ver1_l=2
00:03:32.037    10:18:24 unittest -- scripts/common.sh@341 -- # ver2_l=1
00:03:32.037    10:18:24 unittest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:32.037    10:18:24 unittest -- scripts/common.sh@344 -- # case "$op" in
00:03:32.037    10:18:24 unittest -- scripts/common.sh@345 -- # : 1
00:03:32.037    10:18:24 unittest -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:32.037    10:18:24 unittest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:32.037     10:18:24 unittest -- scripts/common.sh@365 -- # decimal 1
00:03:32.037     10:18:24 unittest -- scripts/common.sh@353 -- # local d=1
00:03:32.037     10:18:24 unittest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:32.037     10:18:24 unittest -- scripts/common.sh@355 -- # echo 1
00:03:32.037    10:18:24 unittest -- scripts/common.sh@365 -- # ver1[v]=1
00:03:32.037     10:18:24 unittest -- scripts/common.sh@366 -- # decimal 2
00:03:32.037     10:18:24 unittest -- scripts/common.sh@353 -- # local d=2
00:03:32.037     10:18:24 unittest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:32.037     10:18:24 unittest -- scripts/common.sh@355 -- # echo 2
00:03:32.037    10:18:24 unittest -- scripts/common.sh@366 -- # ver2[v]=2
00:03:32.037    10:18:24 unittest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:32.037    10:18:24 unittest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:32.037    10:18:24 unittest -- scripts/common.sh@368 -- # return 0
00:03:32.037    10:18:24 unittest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:32.037    10:18:24 unittest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:03:32.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:32.037  		--rc genhtml_branch_coverage=1
00:03:32.037  		--rc genhtml_function_coverage=1
00:03:32.037  		--rc genhtml_legend=1
00:03:32.037  		--rc geninfo_all_blocks=1
00:03:32.037  		--rc geninfo_unexecuted_blocks=1
00:03:32.037  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:32.037  		'
00:03:32.037    10:18:24 unittest -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:03:32.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:32.037  		--rc genhtml_branch_coverage=1
00:03:32.037  		--rc genhtml_function_coverage=1
00:03:32.037  		--rc genhtml_legend=1
00:03:32.037  		--rc geninfo_all_blocks=1
00:03:32.037  		--rc geninfo_unexecuted_blocks=1
00:03:32.037  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:32.037  		'
00:03:32.037    10:18:24 unittest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:03:32.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:32.037  		--rc genhtml_branch_coverage=1
00:03:32.037  		--rc genhtml_function_coverage=1
00:03:32.037  		--rc genhtml_legend=1
00:03:32.037  		--rc geninfo_all_blocks=1
00:03:32.037  		--rc geninfo_unexecuted_blocks=1
00:03:32.037  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:32.037  		'
00:03:32.037    10:18:24 unittest -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:03:32.037  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:32.037  		--rc genhtml_branch_coverage=1
00:03:32.037  		--rc genhtml_function_coverage=1
00:03:32.037  		--rc genhtml_legend=1
00:03:32.037  		--rc geninfo_all_blocks=1
00:03:32.037  		--rc geninfo_unexecuted_blocks=1
00:03:32.037  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:03:32.037  		'
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@159 -- # '[' 0 -eq 1 ']'
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@166 -- # '[' -z x ']'
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@173 -- # '[' 0 -eq 1 ']'
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@182 -- # [[ y == y ]]
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@183 -- # UT_COVERAGE=/home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@184 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:03:32.037   10:18:24 unittest -- unit/unittest.sh@186 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -i -d . -t Baseline -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info
00:03:35.314  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvmf/mdns_server.gcno
00:03:43.434    10:18:34 unittest -- unit/unittest.sh@190 -- # uname -m
00:03:43.434   10:18:34 unittest -- unit/unittest.sh@190 -- # '[' amd64 = aarch64 ']'
00:03:43.434   10:18:34 unittest -- unit/unittest.sh@194 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:43.434  ************************************
00:03:43.434  START TEST unittest_pci_event
00:03:43.434  ************************************
00:03:43.434   10:18:34 unittest.unittest_pci_event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut
00:03:43.434  
00:03:43.434  
00:03:43.434       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.434       http://cunit.sourceforge.net/
00:03:43.434  
00:03:43.434  
00:03:43.434  Suite: pci_event
00:03:43.434    Test: test_pci_parse_event ...passed
00:03:43.434  
00:03:43.434  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.434                suites      1      1    n/a      0        0
00:03:43.434                 tests      1      1      1      0        0
00:03:43.434               asserts      1      1      1      0      n/a
00:03:43.434  
00:03:43.434  Elapsed time =    0.000 seconds
00:03:43.434  
00:03:43.434  real	0m0.016s
00:03:43.434  user	0m0.000s
00:03:43.434  sys	0m0.008s
00:03:43.434   10:18:34 unittest.unittest_pci_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:43.434   10:18:34 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x
00:03:43.434  ************************************
00:03:43.434  END TEST unittest_pci_event
00:03:43.434  ************************************
00:03:43.434   10:18:34 unittest -- unit/unittest.sh@195 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:43.434  ************************************
00:03:43.434  START TEST unittest_include
00:03:43.434  ************************************
00:03:43.434   10:18:34 unittest.unittest_include -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut
00:03:43.434  
00:03:43.434  
00:03:43.434       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.434       http://cunit.sourceforge.net/
00:03:43.434  
00:03:43.434  
00:03:43.434  Suite: histogram
00:03:43.434    Test: histogram_test ...passed
00:03:43.434    Test: histogram_merge ...passed
00:03:43.434  
00:03:43.434  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.434                suites      1      1    n/a      0        0
00:03:43.434                 tests      2      2      2      0        0
00:03:43.434               asserts     50     50     50      0      n/a
00:03:43.434  
00:03:43.434  Elapsed time =    0.000 seconds
00:03:43.434  
00:03:43.434  real	0m0.007s
00:03:43.434  user	0m0.000s
00:03:43.434  sys	0m0.008s
00:03:43.434  ************************************
00:03:43.434  END TEST unittest_include
00:03:43.434  ************************************
00:03:43.434   10:18:34 unittest.unittest_include -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:43.434   10:18:34 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x
00:03:43.434   10:18:34 unittest -- unit/unittest.sh@196 -- # run_test unittest_bdev unittest_bdev
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:43.434   10:18:34 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:43.434  ************************************
00:03:43.434  START TEST unittest_bdev
00:03:43.434  ************************************
00:03:43.434   10:18:34 unittest.unittest_bdev -- common/autotest_common.sh@1129 -- # unittest_bdev
00:03:43.434   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut
00:03:43.434  
00:03:43.434  
00:03:43.434       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.434       http://cunit.sourceforge.net/
00:03:43.434  
00:03:43.434  
00:03:43.434  Suite: bdev
00:03:43.434    Test: bytes_to_blocks_test ...passed
00:03:43.434    Test: num_blocks_test ...passed
00:03:43.434    Test: io_valid_test ...passed
00:03:43.434    Test: open_write_test ...[2024-12-09 10:18:34.719945] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8511:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.720089] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8511:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.720106] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8511:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut
00:03:43.435  passed
00:03:43.435    Test: claim_test ...passed
00:03:43.435    Test: alias_add_del_test ...[2024-12-09 10:18:34.721765] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4934:bdev_name_add: *ERROR*: Bdev name bdev0 already exists
00:03:43.435  [2024-12-09 10:18:34.721790] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4964:spdk_bdev_alias_add: *ERROR*: Empty alias passed
00:03:43.435  [2024-12-09 10:18:34.721798] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4934:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists
00:03:43.435  passed
00:03:43.435    Test: get_device_stat_test ...passed
00:03:43.435    Test: bdev_io_types_test ...passed
00:03:43.435    Test: bdev_io_wait_test ...passed
00:03:43.435    Test: bdev_io_spans_split_test ...passed
00:03:43.435    Test: bdev_io_boundary_split_test ...passed
00:03:43.435    Test: bdev_io_max_size_and_segment_split_test ...[2024-12-09 10:18:34.725714] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3402:_bdev_rw_split: *ERROR*: The first child io was less than a block size
00:03:43.435  passed
00:03:43.435    Test: bdev_io_mix_split_test ...passed
00:03:43.435    Test: bdev_io_split_with_io_wait ...passed
00:03:43.435    Test: bdev_io_write_unit_split_test ...[2024-12-09 10:18:34.728490] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2944:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:03:43.435  [2024-12-09 10:18:34.728515] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2944:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32
00:03:43.435  [2024-12-09 10:18:34.728522] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2944:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32
00:03:43.435  [2024-12-09 10:18:34.728532] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2944:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64
00:03:43.435  passed
00:03:43.435    Test: bdev_io_alignment_with_boundary ...[2024-12-09 10:18:34.729604] iobuf.c: 191:iobuf_node_free: *ERROR*: small iobuf pool count is 8185, expected 8192
00:03:43.435  [2024-12-09 10:18:34.729627] iobuf.c: 196:iobuf_node_free: *ERROR*: large iobuf pool count is 1015, expected 1024
00:03:43.435  passed
00:03:43.435    Test: bdev_io_alignment ...[2024-12-09 10:18:34.730550] iobuf.c: 191:iobuf_node_free: *ERROR*: small iobuf pool count is 8184, expected 8192
00:03:43.435  passed
00:03:43.435    Test: bdev_histograms ...passed
00:03:43.435    Test: bdev_write_zeroes ...passed
00:03:43.435    Test: bdev_compare_and_write ...[2024-12-09 10:18:34.733605] iobuf.c: 191:iobuf_node_free: *ERROR*: small iobuf pool count is 8190, expected 8192
00:03:43.435  passed
00:03:43.435    Test: bdev_compare ...passed
00:03:43.435    Test: bdev_compare_emulated ...[2024-12-09 10:18:34.735974] iobuf.c: 191:iobuf_node_free: *ERROR*: small iobuf pool count is 8188, expected 8192
00:03:43.435  [2024-12-09 10:18:34.736778] iobuf.c: 191:iobuf_node_free: *ERROR*: small iobuf pool count is 8187, expected 8192
00:03:43.435  passed
00:03:43.435    Test: bdev_zcopy_write ...passed
00:03:43.435    Test: bdev_zcopy_read ...passed
00:03:43.435    Test: bdev_open_while_hotremove ...passed
00:03:43.435    Test: bdev_close_while_hotremove ...passed
00:03:43.435    Test: bdev_open_ext_test ...[2024-12-09 10:18:34.738608] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:spdk_bdev_open_ext_v2: *ERROR*: Missing event callback function
00:03:43.435  passed
00:03:43.435    Test: bdev_open_ext_unregister ...passed
00:03:43.435    Test: bdev_set_io_timeout ...[2024-12-09 10:18:34.738635] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8698:spdk_bdev_open_ext_v2: *ERROR*: Missing event callback function
00:03:43.435  passed
00:03:43.435    Test: bdev_set_qd_sampling ...passed
00:03:43.435    Test: lba_range_overlap ...passed
00:03:43.435    Test: lock_lba_range_check_ranges ...passed
00:03:43.435    Test: lock_lba_range_with_io_outstanding ...passed
00:03:43.435    Test: lock_lba_range_overlapped ...passed
00:03:43.435    Test: bdev_quiesce ...[2024-12-09 10:18:34.743343] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10684:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found.
00:03:43.435  passed
00:03:43.435    Test: bdev_io_abort ...passed
00:03:43.435    Test: bdev_unmap ...passed
00:03:43.435    Test: bdev_write_zeroes_split_test ...passed
00:03:43.435    Test: bdev_set_options_test ...passed
00:03:43.435    Test: bdev_get_memory_domains ...passed
00:03:43.435    Test: bdev_io_ext ...[2024-12-09 10:18:34.746020] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 506:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value
00:03:43.435  passed
00:03:43.435    Test: bdev_io_ext_no_opts ...passed
00:03:43.435    Test: bdev_io_ext_invalid_opts ...passed
00:03:43.435    Test: bdev_io_ext_split ...passed
00:03:43.435    Test: bdev_io_ext_bounce_buffer ...[2024-12-09 10:18:34.749850] iobuf.c: 191:iobuf_node_free: *ERROR*: small iobuf pool count is 8188, expected 8192
00:03:43.435  passed
00:03:43.435    Test: bdev_register_uuid_alias ...[2024-12-09 10:18:34.750610] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4934:bdev_name_add: *ERROR*: Bdev name f29a4184-b616-11ef-9b05-d5e34e08fe3b already exists
00:03:43.435  [2024-12-09 10:18:34.750629] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8150:bdev_register: *ERROR*: Unable to add uuid:f29a4184-b616-11ef-9b05-d5e34e08fe3b alias for bdev bdev0
00:03:43.435  passed
00:03:43.435    Test: bdev_unregister_by_name ...passed
00:03:43.435    Test: for_each_bdev_test ...[2024-12-09 10:18:34.750848] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8407:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1
00:03:43.435  [2024-12-09 10:18:34.750855] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8416:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module.
00:03:43.435  passed
00:03:43.435    Test: bdev_seek_test ...passed
00:03:43.435    Test: bdev_copy ...[2024-12-09 10:18:34.752342] iobuf.c: 196:iobuf_node_free: *ERROR*: large iobuf pool count is 1023, expected 1024
00:03:43.435  passed
00:03:43.435    Test: bdev_copy_split_test ...[2024-12-09 10:18:34.753187] iobuf.c: 191:iobuf_node_free: *ERROR*: small iobuf pool count is 8190, expected 8192
00:03:43.435  passed
00:03:43.435    Test: examine_locks ...passed
00:03:43.435    Test: claim_v2_rwo ...[2024-12-09 10:18:34.753455] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8511:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753464] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9239:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753470] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753479] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753488] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9076:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:03:43.435  passed
00:03:43.435    Test: claim_v2_rom ...[2024-12-09 10:18:34.753502] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9235:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims
00:03:43.435  [2024-12-09 10:18:34.753538] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8511:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753549] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:03:43.435  passed
00:03:43.435    Test: claim_v2_rwm ...passed
00:03:43.435    Test: claim_v2_existing_writer ...[2024-12-09 10:18:34.753558] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753570] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9076:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753578] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9277:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims
00:03:43.435  [2024-12-09 10:18:34.753584] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9273:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:03:43.435  [2024-12-09 10:18:34.753603] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9308:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:03:43.435  [2024-12-09 10:18:34.753615] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8511:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753623] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753628] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753634] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9076:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753640] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9327:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753647] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9308:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims
00:03:43.435  passed
00:03:43.435    Test: claim_v2_existing_v1 ...passed
00:03:43.435    Test: claim_v1_existing_v2 ...[2024-12-09 10:18:34.753662] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9273:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:03:43.435  [2024-12-09 10:18:34.753668] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9273:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor
00:03:43.435  [2024-12-09 10:18:34.753682] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753687] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753693] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753716] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9076:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut
00:03:43.435  passed
00:03:43.435    Test: examine_claimed ...[2024-12-09 10:18:34.753728] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9076:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.753739] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9076:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut
00:03:43.435  [2024-12-09 10:18:34.754320] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:03:43.435  passed
00:03:43.435    Test: examine_claimed_manual ...[2024-12-09 10:18:34.755099] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9404:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1
00:03:43.435  passed
00:03:43.435    Test: get_numa_id ...passed
00:03:43.435    Test: get_device_stat_with_reset ...passed
00:03:43.435    Test: open_ext_v2_test ...passed
00:03:43.435    Test: bdev_io_init_dif_ctx_test ...passed
00:03:43.435  
00:03:43.435  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.435                suites      1      1    n/a      0        0
00:03:43.435                 tests     64     64     64      0        0
00:03:43.435               asserts   4718   4718   4718      0      n/a
00:03:43.435  
00:03:43.435  Elapsed time =    0.039 seconds
00:03:43.436  [2024-12-09 10:18:34.756150] dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:43.436   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
00:03:43.436  
00:03:43.436  
00:03:43.436       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.436       http://cunit.sourceforge.net/
00:03:43.436  
00:03:43.436  
00:03:43.436  Suite: nvme
00:03:43.436    Test: test_create_ctrlr ...passed
00:03:43.436    Test: test_reset_ctrlr ...[2024-12-09 10:18:34.761596] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_race_between_reset_and_destruct_ctrlr ...passed
00:03:43.436    Test: test_failover_ctrlr ...passed
00:03:43.436    Test: test_race_between_failover_and_add_secondary_trid ...[2024-12-09 10:18:34.761826] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.761843] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.761853] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_pending_reset ...[2024-12-09 10:18:34.761949] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.761965] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_attach_ctrlr ...passed
00:03:43.436    Test: test_aer_cb ...[2024-12-09 10:18:34.762015] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:03:43.436  passed
00:03:43.436    Test: test_submit_nvme_cmd ...passed
00:03:43.436    Test: test_add_remove_trid ...passed
00:03:43.436    Test: test_abort ...[2024-12-09 10:18:34.762206] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7986:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure.
00:03:43.436  passed
00:03:43.436    Test: test_get_io_qpair ...passed
00:03:43.436    Test: test_bdev_unregister ...passed
00:03:43.436    Test: test_compare_ns ...passed
00:03:43.436    Test: test_init_ana_log_page ...passed
00:03:43.436    Test: test_get_memory_domains ...passed
00:03:43.436    Test: test_reconnect_qpair ...passed
00:03:43.436    Test: test_create_bdev_ctrlr ...[2024-12-09 10:18:34.762358] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 17] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_add_multi_ns_to_bdev ...[2024-12-09 10:18:34.762388] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5777:bdev_nvme_check_multipath: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 18] cntlid 18 are duplicated.
00:03:43.436  [2024-12-09 10:18:34.762467] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4928:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical.
00:03:43.436  passed
00:03:43.436    Test: test_add_multi_io_paths_to_nbdev_ch ...passed
00:03:43.436    Test: test_admin_path ...passed
00:03:43.436    Test: test_reset_bdev_ctrlr ...[2024-12-09 10:18:34.762715] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762732] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762742] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762775] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762795] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762810] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762838] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762847] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762863] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762868] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762879] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 33] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.762885] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 32] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_find_io_path ...passed
00:03:43.436    Test: test_retry_io_if_ana_state_is_updating ...passed
00:03:43.436    Test: test_retry_io_for_io_path_error ...passed
00:03:43.436    Test: test_retry_io_count ...passed
00:03:43.436    Test: test_concurrent_read_ana_log_page ...passed
00:03:43.436    Test: test_retry_io_for_ana_error ...passed
00:03:43.436    Test: test_check_io_error_resiliency_params ...[2024-12-09 10:18:34.763079] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6624:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1.
00:03:43.436  [2024-12-09 10:18:34.763088] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6628:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:03:43.436  [2024-12-09 10:18:34.763097] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6637:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0.
00:03:43.436  [2024-12-09 10:18:34.763107] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6640:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec.
00:03:43.436  [2024-12-09 10:18:34.763118] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6652:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:03:43.436  [2024-12-09 10:18:34.763123] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6652:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0.
00:03:43.436  [2024-12-09 10:18:34.763128] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6632:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec.
00:03:43.436  [2024-12-09 10:18:34.763134] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6647:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec.
00:03:43.436  passed
00:03:43.436    Test: test_retry_io_if_ctrlr_is_resetting ...passed
00:03:43.436    Test: test_reconnect_ctrlr ...[2024-12-09 10:18:34.763139] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6644:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec.
00:03:43.436  [2024-12-09 10:18:34.763191] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763206] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763227] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_retry_failover_ctrlr ...passed
00:03:43.436    Test: test_fail_path ...[2024-12-09 10:18:34.763236] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763244] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763272] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763324] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763337] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_nvme_ns_cmp ...passed
00:03:43.436    Test: test_ana_transition ...passed
00:03:43.436    Test: test_set_preferred_path ...passed
00:03:43.436    Test: test_find_next_io_path ...passed
00:03:43.436    Test: test_find_io_path_min_qd ...passed
00:03:43.436    Test: test_disable_auto_failback ...[2024-12-09 10:18:34.763346] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763355] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763364] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 41] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.763473] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 45] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_set_multipath_policy ...passed
00:03:43.436    Test: test_uuid_generation ...passed
00:03:43.436    Test: test_retry_io_to_same_path ...passed
00:03:43.436    Test: test_race_between_reset_and_disconnected ...passed
00:03:43.436    Test: test_ctrlr_op_rpc ...passed
00:03:43.436    Test: test_bdev_ctrlr_op_rpc ...passed
00:03:43.436    Test: test_disable_enable_ctrlr ...[2024-12-09 10:18:34.782520] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  [2024-12-09 10:18:34.782572] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Resetting controller failed.
00:03:43.436  passed
00:03:43.436    Test: test_delete_ctrlr_done ...passed
00:03:43.436    Test: test_ns_remove_during_reset ...passed
00:03:43.436    Test: test_io_path_is_current ...passed
00:03:43.436    Test: test_bdev_reset_abort_io ...passed
00:03:43.436    Test: test_race_between_clear_pending_resets_and_reset_ctrlr_complete ...passed
00:03:43.436  
00:03:43.436  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.436                suites      1      1    n/a      0        0
00:03:43.436                 tests     51     51     51      0        0
00:03:43.436               asserts   4017   4017   4017      0      n/a
00:03:43.436  
00:03:43.436  Elapsed time =    0.008 seconds
00:03:43.436   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut
00:03:43.436  
00:03:43.436  
00:03:43.436       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.436       http://cunit.sourceforge.net/
00:03:43.436  
00:03:43.436  Test Options
00:03:43.436  blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2
00:03:43.436  
00:03:43.436  Suite: raid
00:03:43.436    Test: test_create_raid ...passed
00:03:43.436    Test: test_create_raid_superblock ...passed
00:03:43.436    Test: test_delete_raid ...passed
00:03:43.436    Test: test_create_raid_invalid_args ...[2024-12-09 10:18:34.791099] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1521:_raid_bdev_create: *ERROR*: Unsupported raid level '-1'
00:03:43.436  [2024-12-09 10:18:34.791213] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1515:_raid_bdev_create: *ERROR*: Invalid strip size 1231
00:03:43.437  [2024-12-09 10:18:34.791264] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1505:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1
00:03:43.437  [2024-12-09 10:18:34.791280] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:03:43.437  [2024-12-09 10:18:34.791286] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:03:43.437  [2024-12-09 10:18:34.791369] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3321:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed
00:03:43.437  [2024-12-09 10:18:34.791374] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3501:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null)
00:03:43.437  passed
00:03:43.437    Test: test_delete_raid_invalid_args ...passed
00:03:43.437    Test: test_io_channel ...passed
00:03:43.437    Test: test_reset_io ...passed
00:03:43.437    Test: test_multi_raid ...passed
00:03:43.437    Test: test_io_type_supported ...passed
00:03:43.437    Test: test_raid_json_dump_info ...passed
00:03:43.437    Test: test_context_size ...passed
00:03:43.437    Test: test_raid_level_conversions ...passed
00:03:43.437    Test: test_raid_io_split ...passed
00:03:43.437    Test: test_raid_process ...passed
00:03:43.437    Test: test_raid_process_with_qos ...passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      1      1    n/a      0        0
00:03:43.437                 tests     15     15     15      0        0
00:03:43.437               asserts   6602   6602   6602      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.000 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: raid_sb
00:03:43.437    Test: test_raid_bdev_write_superblock ...passed
00:03:43.437    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:03:43.437    Test: test_raid_bdev_parse_superblock ...[2024-12-09 10:18:34.796550] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:03:43.437  passed
00:03:43.437  Suite: raid_sb_md
00:03:43.437    Test: test_raid_bdev_write_superblock ...passed
00:03:43.437    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:03:43.437    Test: test_raid_bdev_parse_superblock ...passed
00:03:43.437  Suite: raid_sb_md_interleaved
00:03:43.437    Test: test_raid_bdev_write_superblock ...passed
00:03:43.437    Test: test_raid_bdev_load_base_bdev_superblock ...passed
00:03:43.437    Test: test_raid_bdev_parse_superblock ...passed
00:03:43.437  [2024-12-09 10:18:34.796741] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:03:43.437  [2024-12-09 10:18:34.796823] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev
00:03:43.437  
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      3      3    n/a      0        0
00:03:43.437                 tests      9      9      9      0        0
00:03:43.437               asserts    139    139    139      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.000 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: concat
00:03:43.437    Test: test_concat_start ...passed
00:03:43.437    Test: test_concat_rw ...passed
00:03:43.437    Test: test_concat_null_payload ...passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      1      1    n/a      0        0
00:03:43.437                 tests      3      3      3      0        0
00:03:43.437               asserts   8460   8460   8460      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.000 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: raid0
00:03:43.437    Test: test_write_io ...passed
00:03:43.437    Test: test_read_io ...passed
00:03:43.437    Test: test_unmap_io ...passed
00:03:43.437    Test: test_io_failure ...passed
00:03:43.437  Suite: raid0_dif
00:03:43.437    Test: test_write_io ...passed
00:03:43.437    Test: test_read_io ...passed
00:03:43.437    Test: test_unmap_io ...passed
00:03:43.437    Test: test_io_failure ...passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      2      2    n/a      0        0
00:03:43.437                 tests      8      8      8      0        0
00:03:43.437               asserts 368291 368291 368291      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.008 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: raid1
00:03:43.437    Test: test_raid1_start ...passed
00:03:43.437    Test: test_raid1_read_balancing ...passed
00:03:43.437    Test: test_raid1_write_error ...passed
00:03:43.437    Test: test_raid1_read_error ...passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      1      1    n/a      0        0
00:03:43.437                 tests      4      4      4      0        0
00:03:43.437               asserts   4374   4374   4374      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.000 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: zone
00:03:43.437    Test: test_zone_get_operation ...passed
00:03:43.437    Test: test_bdev_zone_get_info ...passed
00:03:43.437    Test: test_bdev_zone_management ...passed
00:03:43.437    Test: test_bdev_zone_append ...passed
00:03:43.437    Test: test_bdev_zone_append_with_md ...passed
00:03:43.437    Test: test_bdev_zone_appendv ...passed
00:03:43.437    Test: test_bdev_zone_appendv_with_md ...passed
00:03:43.437    Test: test_bdev_io_get_append_location ...passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      1      1    n/a      0        0
00:03:43.437                 tests      8      8      8      0        0
00:03:43.437               asserts     94     94     94      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.000 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: gpt_parse
00:03:43.437    Test: test_parse_mbr_and_primary ...[2024-12-09 10:18:34.830307] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:03:43.437  [2024-12-09 10:18:34.830858] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:03:43.437  [2024-12-09 10:18:34.831020] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:03:43.437  [2024-12-09 10:18:34.831034] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:03:43.437  [2024-12-09 10:18:34.831050] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:03:43.437  [2024-12-09 10:18:34.831063] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:03:43.437  passed
00:03:43.437    Test: test_parse_secondary ...[2024-12-09 10:18:34.831403] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873
00:03:43.437  [2024-12-09 10:18:34.831427] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header
00:03:43.437  [2024-12-09 10:18:34.831439] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128
00:03:43.437  [2024-12-09 10:18:34.831448] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions
00:03:43.437  passed
00:03:43.437    Test: test_check_mbr ...passed
00:03:43.437    Test: test_read_header ...passed
00:03:43.437    Test: test_read_partitions ...[2024-12-09 10:18:34.831563] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:03:43.437  [2024-12-09 10:18:34.831573] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL
00:03:43.437  [2024-12-09 10:18:34.831585] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600
00:03:43.437  [2024-12-09 10:18:34.831591] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438
00:03:43.437  [2024-12-09 10:18:34.831597] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match
00:03:43.437  [2024-12-09 10:18:34.831603] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1)
00:03:43.437  [2024-12-09 10:18:34.831609] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0)
00:03:43.437  [2024-12-09 10:18:34.831614] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error
00:03:43.437  [2024-12-09 10:18:34.831622] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128
00:03:43.437  [2024-12-09 10:18:34.831628] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80)
00:03:43.437  [2024-12-09 10:18:34.831633] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c:  59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough
00:03:43.437  [2024-12-09 10:18:34.831638] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf
00:03:43.437  [2024-12-09 10:18:34.831697] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match
00:03:43.437  passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      1      1    n/a      0        0
00:03:43.437                 tests      5      5      5      0        0
00:03:43.437               asserts     33     33     33      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.000 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: bdev_part
00:03:43.437    Test: part_test ...[2024-12-09 10:18:34.838445] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4934:bdev_name_add: *ERROR*: Bdev name b3a28e7f-7f0d-1958-b8a8-e65e6ba27a8e already exists
00:03:43.437  passed
00:03:43.437    Test: part_free_test ...[2024-12-09 10:18:34.838608] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8150:bdev_register: *ERROR*: Unable to add uuid:b3a28e7f-7f0d-1958-b8a8-e65e6ba27a8e alias for bdev test1
00:03:43.437  passed
00:03:43.437    Test: part_get_io_channel_test ...passed
00:03:43.437    Test: part_construct_ext ...passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      1      1    n/a      0        0
00:03:43.437                 tests      4      4      4      0        0
00:03:43.437               asserts     48     48     48      0      n/a
00:03:43.437  
00:03:43.437  Elapsed time =    0.008 seconds
00:03:43.437   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut
00:03:43.437  
00:03:43.437  
00:03:43.437       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.437       http://cunit.sourceforge.net/
00:03:43.437  
00:03:43.437  
00:03:43.437  Suite: scsi_nvme_suite
00:03:43.437    Test: scsi_nvme_translate_test ...passed
00:03:43.437  
00:03:43.437  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.437                suites      1      1    n/a      0        0
00:03:43.438                 tests      1      1      1      0        0
00:03:43.438               asserts    104    104    104      0      n/a
00:03:43.438  
00:03:43.438  Elapsed time =    0.000 seconds
00:03:43.438   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut
00:03:43.438  
00:03:43.438  
00:03:43.438       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.438       http://cunit.sourceforge.net/
00:03:43.438  
00:03:43.438  
00:03:43.438  Suite: lvol
00:03:43.438    Test: ut_lvs_init ...passed
00:03:43.438    Test: ut_lvol_init ...passed
00:03:43.438    Test: ut_lvol_snapshot ...passed
00:03:43.438    Test: ut_lvol_clone ...passed
00:03:43.438    Test: ut_lvs_destroy ...passed
00:03:43.438    Test: ut_lvs_unload ...passed
00:03:43.438    Test: ut_lvol_resize ...[2024-12-09 10:18:34.847482] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev
00:03:43.438  [2024-12-09 10:18:34.847586] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 275:vbdev_lvs_create_ext: *ERROR*: Cannot create blobstore device
00:03:43.438  passed
00:03:43.438    Test: ut_lvol_set_read_only ...passed
00:03:43.438    Test: ut_lvol_hotremove ...passed
00:03:43.438    Test: ut_vbdev_lvol_get_io_channel ...passed
00:03:43.438    Test: ut_vbdev_lvol_io_type_supported ...passed
00:03:43.438    Test: ut_lvol_read_write ...passed
00:03:43.438    Test: ut_vbdev_lvol_submit_request ...passed
00:03:43.438    Test: ut_lvol_examine_config ...[2024-12-09 10:18:34.847632] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1422:vbdev_lvol_resize: *ERROR*: lvol does not exist
00:03:43.438  passed
00:03:43.438    Test: ut_lvol_examine_disk ...passed
00:03:43.438    Test: ut_lvol_rename ...passed
00:03:43.438    Test: ut_bdev_finish ...passed
00:03:43.438    Test: ut_lvs_rename ...passed
00:03:43.438    Test: ut_lvol_seek ...passed
00:03:43.438    Test: ut_esnap_dev_create ...passed
00:03:43.438    Test: ut_lvol_esnap_clone_bad_args ...[2024-12-09 10:18:34.847670] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1564:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID
00:03:43.438  [2024-12-09 10:18:34.847694] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name'
00:03:43.438  [2024-12-09 10:18:34.847700] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1372:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed
00:03:43.438  [2024-12-09 10:18:34.847722] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1907:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID
00:03:43.438  [2024-12-09 10:18:34.847727] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1913:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36)
00:03:43.438  [2024-12-09 10:18:34.847733] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1918:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID
00:03:43.438  [2024-12-09 10:18:34.847749] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1308:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified
00:03:43.438  passed
00:03:43.438    Test: ut_lvol_shallow_copy ...passed
00:03:43.438    Test: ut_lvol_set_external_parent ...passed
00:03:43.438  
00:03:43.438  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.438                suites      1      1    n/a      0        0
00:03:43.438                 tests     23     23     23      0        0
00:03:43.438               asserts    770    770    770      0      n/a
00:03:43.438  
00:03:43.438  Elapsed time =    0.000 seconds
00:03:43.438  [2024-12-09 10:18:34.847756] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1315:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:03:43.438  [2024-12-09 10:18:34.847770] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2005:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:03:43.438  [2024-12-09 10:18:34.847775] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2010:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL
00:03:43.438  [2024-12-09 10:18:34.847784] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2065:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19
00:03:43.438   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut
00:03:43.438  
00:03:43.438  
00:03:43.438       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.438       http://cunit.sourceforge.net/
00:03:43.438  
00:03:43.438  
00:03:43.438  Suite: zone_block
00:03:43.438    Test: test_zone_block_create ...passed
00:03:43.438    Test: test_zone_block_create_invalid ...[2024-12-09 10:18:34.855858] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed
00:03:43.438  [2024-12-09 10:18:34.856003] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:03:43.438  [2024-12-09 10:18:34.856019] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev
00:03:43.438  passed
00:03:43.438    Test: test_get_zone_info ...[2024-12-09 10:18:34.856029] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists
00:03:43.438  [2024-12-09 10:18:34.856040] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 861:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0
00:03:43.438  [2024-12-09 10:18:34.856048] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:03:43.438  [2024-12-09 10:18:34.856057] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 866:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0
00:03:43.438  [2024-12-09 10:18:34.856064] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c:  58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument
00:03:43.438  [2024-12-09 10:18:34.856105] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856115] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  passed
00:03:43.438    Test: test_supported_io_types ...passed
00:03:43.438    Test: test_reset_zone ...passed
00:03:43.438    Test: test_open_zone ...[2024-12-09 10:18:34.856125] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856165] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856175] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856199] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  passed
00:03:43.438    Test: test_zone_write ...[2024-12-09 10:18:34.856317] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856325] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856353] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:03:43.438  [2024-12-09 10:18:34.856362] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856372] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:03:43.438  [2024-12-09 10:18:34.856383] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856687] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405)
00:03:43.438  [2024-12-09 10:18:34.856704] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.856714] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405)
00:03:43.438  [2024-12-09 10:18:34.856723] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  [2024-12-09 10:18:34.857039] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:03:43.438  [2024-12-09 10:18:34.857055] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.438  passed
00:03:43.438    Test: test_zone_read ...[2024-12-09 10:18:34.857082] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10)
00:03:43.438  [2024-12-09 10:18:34.857091] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857101] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000)
00:03:43.439  [2024-12-09 10:18:34.857109] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857135] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10)
00:03:43.439  passed
00:03:43.439    Test: test_close_zone ...passed
00:03:43.439    Test: test_finish_zone ...[2024-12-09 10:18:34.857144] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857167] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857177] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857201] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857211] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  passed
00:03:43.439    Test: test_append_zone ...[2024-12-09 10:18:34.857246] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857257] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857277] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2
00:03:43.439  [2024-12-09 10:18:34.857284] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857293] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000)
00:03:43.439  [2024-12-09 10:18:34.857301] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  [2024-12-09 10:18:34.857912] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0)
00:03:43.439  [2024-12-09 10:18:34.857928] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission!
00:03:43.439  passed
00:03:43.439  
00:03:43.439  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.439                suites      1      1    n/a      0        0
00:03:43.439                 tests     11     11     11      0        0
00:03:43.439               asserts   3437   3437   3437      0      n/a
00:03:43.439  
00:03:43.439  Elapsed time =    0.008 seconds
00:03:43.439   10:18:34 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut
00:03:43.439  
00:03:43.439  
00:03:43.439       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.439       http://cunit.sourceforge.net/
00:03:43.439  
00:03:43.439  
00:03:43.439  Suite: bdev
00:03:43.439    Test: basic ...[2024-12-09 10:18:34.864283] thread.c:2418:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x293499): Operation not permitted (rc=-1)
00:03:43.439  [2024-12-09 10:18:34.864415] thread.c:2418:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x16e04cc6a480 (0x293490): Operation not permitted (rc=-1)
00:03:43.439  [2024-12-09 10:18:34.864428] thread.c:2418:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x293499): Operation not permitted (rc=-1)
00:03:43.439  passed
00:03:43.439    Test: unregister_and_close ...passed
00:03:43.439    Test: unregister_and_close_different_threads ...passed
00:03:43.439    Test: basic_qos ...passed
00:03:43.439    Test: put_channel_during_reset ...passed
00:03:43.439    Test: aborted_reset ...passed
00:03:43.439    Test: aborted_reset_no_outstanding_io ...passed
00:03:43.439    Test: io_during_reset ...passed
00:03:43.439    Test: reset_completions ...passed
00:03:43.439    Test: io_during_qos_queue ...passed
00:03:43.439    Test: io_during_qos_reset ...passed
00:03:43.439    Test: enomem ...passed
00:03:43.439    Test: enomem_multi_bdev ...passed
00:03:43.439    Test: enomem_multi_bdev_unregister ...passed
00:03:43.439    Test: enomem_multi_io_target ...passed
00:03:43.439    Test: qos_dynamic_enable ...passed
00:03:43.439    Test: bdev_histograms_mt ...passed
00:03:43.439    Test: bdev_set_io_timeout_mt ...passed
00:03:43.439    Test: lock_lba_range_then_submit_io ...[2024-12-09 10:18:34.885624] thread.c: 493:spdk_thread_lib_fini: *ERROR*: io_device 0x16e04cc6a600 not unregistered
00:03:43.439  [2024-12-09 10:18:34.886278] thread.c:2222:spdk_io_device_register: *ERROR*: io_device 0x293474 already registered (old:0x16e04cc6a600 new:0x16e04cc6a780)
00:03:43.439  passed
00:03:43.439    Test: unregister_during_reset ...passed
00:03:43.439    Test: event_notify_and_close ...passed
00:03:43.439    Test: unregister_and_qos_poller ...passed
00:03:43.439    Test: reset_start_complete_race ...passed
00:03:43.439  Suite: bdev_wrong_thread
00:03:43.439    Test: spdk_bdev_register_wt ...passed
00:03:43.439    Test: spdk_bdev_examine_wt ...passed
00:03:43.439  
00:03:43.439  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:43.439                suites      2      2    n/a      0        0
00:03:43.439                 tests     25     25     25      0        0
00:03:43.439               asserts    637    637    637      0      n/a
00:03:43.439  
00:03:43.439  Elapsed time =    0.031 seconds
00:03:43.439  [2024-12-09 10:18:34.891208] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:9034:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x16e04cc37800 (0x16e04cc37800)
00:03:43.439  [2024-12-09 10:18:34.891241] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 836:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x16e04cc37800 (0x16e04cc37800)
00:03:43.439  
00:03:43.439  real	0m0.181s
00:03:43.439  user	0m0.134s
00:03:43.439  sys	0m0.049s
00:03:43.439   10:18:34 unittest.unittest_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:43.439   10:18:34 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x
00:03:43.439  ************************************
00:03:43.439  END TEST unittest_bdev
00:03:43.439  ************************************
00:03:43.439   10:18:34 unittest -- unit/unittest.sh@197 -- # [[ n == y ]]
00:03:43.439   10:18:34 unittest -- unit/unittest.sh@202 -- # [[ n == y ]]
00:03:43.439   10:18:34 unittest -- unit/unittest.sh@207 -- # [[ n == y ]]
00:03:43.439   10:18:34 unittest -- unit/unittest.sh@211 -- # [[ n == y ]]
00:03:43.439   10:18:34 unittest -- unit/unittest.sh@215 -- # run_test unittest_blob_blobfs unittest_blob
00:03:43.439   10:18:34 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:43.439   10:18:34 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:43.439   10:18:34 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:43.439  ************************************
00:03:43.439  START TEST unittest_blob_blobfs
00:03:43.439  ************************************
00:03:43.439   10:18:34 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1129 -- # unittest_blob
00:03:43.439   10:18:34 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]]
00:03:43.439   10:18:34 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut
00:03:43.439  
00:03:43.439  
00:03:43.439       CUnit - A unit testing framework for C - Version 2.1-3
00:03:43.439       http://cunit.sourceforge.net/
00:03:43.439  
00:03:43.439  
00:03:43.439  Suite: blob_nocopy_noextent
00:03:43.439    Test: blob_init ...[2024-12-09 10:18:34.927693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5528:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:03:43.439  passed
00:03:43.439    Test: blob_thin_provision ...passed
00:03:43.439    Test: blob_read_only ...passed
00:03:43.439    Test: bs_load ...[2024-12-09 10:18:34.969827] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 975:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:03:43.439  passed
00:03:43.439    Test: bs_load_custom_cluster_size ...passed
00:03:43.439    Test: bs_load_after_failed_grow ...passed
00:03:43.439    Test: bs_load_error ...passed
00:03:43.439    Test: bs_cluster_sz ...[2024-12-09 10:18:34.985516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:03:43.439  [2024-12-09 10:18:34.985557] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5663:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:03:43.439  [2024-12-09 10:18:34.985567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3840:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:03:43.439  passed
00:03:43.439    Test: bs_resize_md ...passed
00:03:43.439    Test: bs_destroy ...passed
00:03:43.439    Test: bs_type ...passed
00:03:43.439    Test: bs_super_block ...passed
00:03:43.439    Test: bs_test_recover_cluster_count ...passed
00:03:43.439    Test: bs_grow_live ...passed
00:03:43.439    Test: bs_grow_live_no_space ...passed
00:03:43.439    Test: bs_test_grow ...passed
00:03:43.439    Test: blob_serialize_test ...passed
00:03:43.439    Test: super_block_crc ...passed
00:03:43.439    Test: blob_thin_prov_write_count_io ...passed
00:03:43.439    Test: blob_thin_prov_unmap_cluster ...passed
00:03:43.439    Test: bs_load_iter_test ...passed
00:03:43.439    Test: blob_relations ...[2024-12-09 10:18:35.072566] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:43.439  [2024-12-09 10:18:35.072611] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.439  [2024-12-09 10:18:35.072692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:43.439  [2024-12-09 10:18:35.072698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.439  passed
00:03:43.439    Test: blob_relations2 ...[2024-12-09 10:18:35.078937] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:43.439  [2024-12-09 10:18:35.078968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.439  [2024-12-09 10:18:35.078975] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:43.439  [2024-12-09 10:18:35.078981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.439  [2024-12-09 10:18:35.079096] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:43.439  [2024-12-09 10:18:35.079103] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.439  [2024-12-09 10:18:35.079130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:43.439  [2024-12-09 10:18:35.079136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.439  passed
00:03:43.440    Test: blob_relations3 ...passed
00:03:43.440    Test: blobstore_clean_power_failure ...passed
00:03:43.440    Test: blob_delete_snapshot_power_failure ...[2024-12-09 10:18:35.153058] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:03:43.440  [2024-12-09 10:18:35.158428] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:43.440  [2024-12-09 10:18:35.158459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:43.440  [2024-12-09 10:18:35.158466] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.440  [2024-12-09 10:18:35.164156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:03:43.440  [2024-12-09 10:18:35.164187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:43.440  [2024-12-09 10:18:35.164193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:43.440  [2024-12-09 10:18:35.164199] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.440  [2024-12-09 10:18:35.170112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:03:43.440  [2024-12-09 10:18:35.170143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.440  [2024-12-09 10:18:35.175977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:03:43.440  [2024-12-09 10:18:35.176008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.440  [2024-12-09 10:18:35.181769] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:03:43.440  [2024-12-09 10:18:35.181799] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.440  passed
00:03:43.440    Test: blob_create_snapshot_power_failure ...[2024-12-09 10:18:35.198446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:43.440  [2024-12-09 10:18:35.208444] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:03:43.440  [2024-12-09 10:18:35.213584] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:03:43.440  passed
00:03:43.440    Test: blob_io_unit ...passed
00:03:43.440    Test: blob_io_unit_compatibility ...passed
00:03:43.440    Test: blob_ext_md_pages ...passed
00:03:43.440    Test: blob_esnap_io_4096_4096 ...passed
00:03:43.440    Test: blob_esnap_io_512_512 ...passed
00:03:43.440    Test: blob_esnap_io_4096_512 ...passed
00:03:43.440    Test: blob_esnap_io_512_4096 ...passed
00:03:43.440    Test: blob_esnap_clone_resize ...passed
00:03:43.440  Suite: blob_bs_nocopy_noextent
00:03:43.440    Test: blob_open ...passed
00:03:43.440    Test: blob_create ...[2024-12-09 10:18:35.315459] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:03:43.440  passed
00:03:43.440    Test: blob_create_loop ...passed
00:03:43.440    Test: blob_create_fail ...[2024-12-09 10:18:35.352051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:43.440  passed
00:03:43.440    Test: blob_create_internal ...passed
00:03:43.440    Test: blob_create_zero_extent ...passed
00:03:43.440    Test: blob_snapshot ...passed
00:03:43.440    Test: blob_clone ...passed
00:03:43.440    Test: blob_inflate ...[2024-12-09 10:18:35.425899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:03:43.440  passed
00:03:43.440    Test: blob_delete ...passed
00:03:43.440    Test: blob_resize_test ...[2024-12-09 10:18:35.453790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:03:43.440  passed
00:03:43.440    Test: blob_resize_thin_test ...passed
00:03:43.440    Test: channel_ops ...passed
00:03:43.440    Test: blob_super ...passed
00:03:43.440    Test: blob_rw_verify_iov ...passed
00:03:43.440    Test: blob_unmap ...passed
00:03:43.440    Test: blob_iter ...passed
00:03:43.440    Test: blob_parse_md ...passed
00:03:43.440    Test: bs_load_pending_removal ...passed
00:03:43.440    Test: bs_unload ...[2024-12-09 10:18:35.588762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:03:43.701  passed
00:03:43.701    Test: bs_usable_clusters ...passed
00:03:43.701    Test: blob_crc ...[2024-12-09 10:18:35.619575] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:43.701  [2024-12-09 10:18:35.619615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:43.701  passed
00:03:43.701    Test: blob_flags ...passed
00:03:43.701    Test: bs_version ...passed
00:03:43.701    Test: blob_set_xattrs_test ...[2024-12-09 10:18:35.663023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:43.701  [2024-12-09 10:18:35.663061] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:43.701  passed
00:03:43.701    Test: blob_thin_prov_alloc ...passed
00:03:43.701    Test: blob_insert_cluster_msg_test ...passed
00:03:43.701    Test: blob_thin_prov_rw ...passed
00:03:43.701    Test: blob_thin_prov_rle ...passed
00:03:43.701    Test: blob_thin_prov_rw_iov ...passed
00:03:43.701    Test: blob_snapshot_rw ...passed
00:03:43.701    Test: blob_snapshot_rw_iov ...passed
00:03:43.701    Test: blob_inflate_rw ...passed
00:03:43.701    Test: blob_snapshot_freeze_io ...passed
00:03:43.960    Test: blob_operation_split_rw ...passed
00:03:43.960    Test: blob_operation_split_rw_iov ...passed
00:03:43.960    Test: blob_simultaneous_operations ...[2024-12-09 10:18:35.948724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:43.960  [2024-12-09 10:18:35.948772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.960  [2024-12-09 10:18:35.948956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:43.960  [2024-12-09 10:18:35.948968] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.960  [2024-12-09 10:18:35.951660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:43.960  [2024-12-09 10:18:35.951681] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.960  [2024-12-09 10:18:35.951698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:43.960  [2024-12-09 10:18:35.951704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:43.960  passed
00:03:43.960    Test: blob_persist_test ...passed
00:03:43.960    Test: blob_decouple_snapshot ...passed
00:03:43.960    Test: blob_seek_io_unit ...passed
00:03:43.960    Test: blob_nested_freezes ...passed
00:03:43.960    Test: blob_clone_resize ...passed
00:03:43.960    Test: blob_shallow_copy ...[2024-12-09 10:18:36.068458] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:03:43.960  [2024-12-09 10:18:36.068510] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7386:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:03:43.960  [2024-12-09 10:18:36.068519] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7394:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:03:43.960  passed
00:03:43.960  Suite: blob_blob_nocopy_noextent
00:03:43.960    Test: blob_write ...passed
00:03:43.960    Test: blob_read ...passed
00:03:44.218    Test: blob_rw_verify ...passed
00:03:44.218    Test: blob_rw_verify_iov_nomem ...passed
00:03:44.218    Test: blob_rw_iov_read_only ...passed
00:03:44.218    Test: blob_xattr ...passed
00:03:44.218    Test: blob_dirty_shutdown ...passed
00:03:44.218    Test: blob_is_degraded ...passed
00:03:44.218  Suite: blob_esnap_bs_nocopy_noextent
00:03:44.218    Test: blob_esnap_create ...passed
00:03:44.218    Test: blob_esnap_thread_add_remove ...passed
00:03:44.218    Test: blob_esnap_clone_snapshot ...passed
00:03:44.218    Test: blob_esnap_clone_inflate ...passed
00:03:44.218    Test: blob_esnap_clone_decouple ...passed
00:03:44.218    Test: blob_esnap_clone_reload ...passed
00:03:44.218    Test: blob_esnap_hotplug ...passed
00:03:44.218    Test: blob_set_parent ...[2024-12-09 10:18:36.300641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:03:44.218  [2024-12-09 10:18:36.300682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:03:44.218  [2024-12-09 10:18:36.300698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:03:44.218  [2024-12-09 10:18:36.300712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:03:44.218  [2024-12-09 10:18:36.300757] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:44.218  passed
00:03:44.218    Test: blob_set_external_parent ...[2024-12-09 10:18:36.321918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:03:44.219  [2024-12-09 10:18:36.321966] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7840:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:03:44.219  [2024-12-09 10:18:36.321981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:03:44.219  [2024-12-09 10:18:36.322023] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:44.219  passed
00:03:44.219  Suite: blob_nocopy_extent
00:03:44.219    Test: blob_init ...[2024-12-09 10:18:36.327667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5528:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:03:44.219  passed
00:03:44.219    Test: blob_thin_provision ...passed
00:03:44.219    Test: blob_read_only ...passed
00:03:44.219    Test: bs_load ...[2024-12-09 10:18:36.349245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 975:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:03:44.219  passed
00:03:44.219    Test: bs_load_custom_cluster_size ...passed
00:03:44.219    Test: bs_load_after_failed_grow ...passed
00:03:44.219    Test: bs_load_error ...passed
00:03:44.219    Test: bs_cluster_sz ...[2024-12-09 10:18:36.364478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:03:44.219  [2024-12-09 10:18:36.364521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5663:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:03:44.219  [2024-12-09 10:18:36.364530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3840:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:03:44.219  passed
00:03:44.219    Test: bs_resize_md ...passed
00:03:44.477    Test: bs_destroy ...passed
00:03:44.477    Test: bs_type ...passed
00:03:44.477    Test: bs_super_block ...passed
00:03:44.477    Test: bs_test_recover_cluster_count ...passed
00:03:44.477    Test: bs_grow_live ...passed
00:03:44.477    Test: bs_grow_live_no_space ...passed
00:03:44.477    Test: bs_test_grow ...passed
00:03:44.477    Test: blob_serialize_test ...passed
00:03:44.477    Test: super_block_crc ...passed
00:03:44.477    Test: blob_thin_prov_write_count_io ...passed
00:03:44.477    Test: blob_thin_prov_unmap_cluster ...passed
00:03:44.477    Test: bs_load_iter_test ...passed
00:03:44.477    Test: blob_relations ...[2024-12-09 10:18:36.449366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:44.477  [2024-12-09 10:18:36.449416] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.449498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:44.477  [2024-12-09 10:18:36.449505] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  passed
00:03:44.477    Test: blob_relations2 ...[2024-12-09 10:18:36.455731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:44.477  [2024-12-09 10:18:36.455764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.455771] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:44.477  [2024-12-09 10:18:36.455776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.455885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:44.477  [2024-12-09 10:18:36.455891] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.455921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:44.477  [2024-12-09 10:18:36.455927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  passed
00:03:44.477    Test: blob_relations3 ...passed
00:03:44.477    Test: blobstore_clean_power_failure ...passed
00:03:44.477    Test: blob_delete_snapshot_power_failure ...[2024-12-09 10:18:36.523642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:44.477  [2024-12-09 10:18:36.528955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:44.477  [2024-12-09 10:18:36.533943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:44.477  [2024-12-09 10:18:36.533974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:44.477  [2024-12-09 10:18:36.533981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.539241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:44.477  [2024-12-09 10:18:36.539302] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:44.477  [2024-12-09 10:18:36.539308] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:44.477  [2024-12-09 10:18:36.539314] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.544767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:44.477  [2024-12-09 10:18:36.544790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:44.477  [2024-12-09 10:18:36.544796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:44.477  [2024-12-09 10:18:36.544802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.550537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:03:44.477  [2024-12-09 10:18:36.550567] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.556307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:03:44.477  [2024-12-09 10:18:36.556366] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  [2024-12-09 10:18:36.562339] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:03:44.477  [2024-12-09 10:18:36.562368] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:44.477  passed
00:03:44.477    Test: blob_create_snapshot_power_failure ...[2024-12-09 10:18:36.579370] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:44.477  [2024-12-09 10:18:36.584641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:44.477  [2024-12-09 10:18:36.594529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:44.477  [2024-12-09 10:18:36.599732] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:03:44.477  passed
00:03:44.477    Test: blob_io_unit ...passed
00:03:44.477    Test: blob_io_unit_compatibility ...passed
00:03:44.477    Test: blob_ext_md_pages ...passed
00:03:44.736    Test: blob_esnap_io_4096_4096 ...passed
00:03:44.736    Test: blob_esnap_io_512_512 ...passed
00:03:44.736    Test: blob_esnap_io_4096_512 ...passed
00:03:44.736    Test: blob_esnap_io_512_4096 ...passed
00:03:44.736    Test: blob_esnap_clone_resize ...passed
00:03:44.736  Suite: blob_bs_nocopy_extent
00:03:44.736    Test: blob_open ...passed
00:03:44.736    Test: blob_create ...[2024-12-09 10:18:36.699872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:03:44.736  passed
00:03:44.736    Test: blob_create_loop ...passed
00:03:44.736    Test: blob_create_fail ...[2024-12-09 10:18:36.736140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:44.736  passed
00:03:44.736    Test: blob_create_internal ...passed
00:03:44.736    Test: blob_create_zero_extent ...passed
00:03:44.736    Test: blob_snapshot ...passed
00:03:44.736    Test: blob_clone ...passed
00:03:44.736    Test: blob_inflate ...[2024-12-09 10:18:36.812193] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:03:44.736  passed
00:03:44.736    Test: blob_delete ...passed
00:03:44.736    Test: blob_resize_test ...[2024-12-09 10:18:36.840755] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:03:44.736  passed
00:03:44.736    Test: blob_resize_thin_test ...passed
00:03:44.736    Test: channel_ops ...passed
00:03:44.736    Test: blob_super ...passed
00:03:44.994    Test: blob_rw_verify_iov ...passed
00:03:44.994    Test: blob_unmap ...passed
00:03:44.994    Test: blob_iter ...passed
00:03:44.994    Test: blob_parse_md ...passed
00:03:44.994    Test: bs_load_pending_removal ...passed
00:03:44.994    Test: bs_unload ...[2024-12-09 10:18:36.973380] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:03:44.994  passed
00:03:44.994    Test: bs_usable_clusters ...passed
00:03:44.994    Test: blob_crc ...[2024-12-09 10:18:37.002863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:44.994  [2024-12-09 10:18:37.002904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:44.994  passed
00:03:44.994    Test: blob_flags ...passed
00:03:44.994    Test: bs_version ...passed
00:03:44.994    Test: blob_set_xattrs_test ...[2024-12-09 10:18:37.045497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:44.994  [2024-12-09 10:18:37.045536] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:44.994  passed
00:03:44.994    Test: blob_thin_prov_alloc ...passed
00:03:44.994    Test: blob_insert_cluster_msg_test ...passed
00:03:44.994    Test: blob_thin_prov_rw ...passed
00:03:44.994    Test: blob_thin_prov_rle ...passed
00:03:44.994    Test: blob_thin_prov_rw_iov ...passed
00:03:44.994    Test: blob_snapshot_rw ...passed
00:03:45.253    Test: blob_snapshot_rw_iov ...passed
00:03:45.253    Test: blob_inflate_rw ...passed
00:03:45.253    Test: blob_snapshot_freeze_io ...passed
00:03:45.253    Test: blob_operation_split_rw ...passed
00:03:45.253    Test: blob_operation_split_rw_iov ...passed
00:03:45.253    Test: blob_simultaneous_operations ...[2024-12-09 10:18:37.319571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:45.253  [2024-12-09 10:18:37.319614] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.253  [2024-12-09 10:18:37.319774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:45.253  [2024-12-09 10:18:37.319784] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.253  [2024-12-09 10:18:37.322405] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:45.253  [2024-12-09 10:18:37.322431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.253  [2024-12-09 10:18:37.322447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:45.253  [2024-12-09 10:18:37.322453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.253  passed
00:03:45.253    Test: blob_persist_test ...passed
00:03:45.253    Test: blob_decouple_snapshot ...passed
00:03:45.253    Test: blob_seek_io_unit ...passed
00:03:45.253    Test: blob_nested_freezes ...passed
00:03:45.512    Test: blob_clone_resize ...passed
00:03:45.512    Test: blob_shallow_copy ...[2024-12-09 10:18:37.425153] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:03:45.512  [2024-12-09 10:18:37.425203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7386:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:03:45.512  [2024-12-09 10:18:37.425212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7394:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:03:45.512  passed
00:03:45.512  Suite: blob_blob_nocopy_extent
00:03:45.512    Test: blob_write ...passed
00:03:45.512    Test: blob_read ...passed
00:03:45.512    Test: blob_rw_verify ...passed
00:03:45.512    Test: blob_rw_verify_iov_nomem ...passed
00:03:45.512    Test: blob_rw_iov_read_only ...passed
00:03:45.512    Test: blob_xattr ...passed
00:03:45.512    Test: blob_dirty_shutdown ...passed
00:03:45.512    Test: blob_is_degraded ...passed
00:03:45.512  Suite: blob_esnap_bs_nocopy_extent
00:03:45.512    Test: blob_esnap_create ...passed
00:03:45.512    Test: blob_esnap_thread_add_remove ...passed
00:03:45.512    Test: blob_esnap_clone_snapshot ...passed
00:03:45.512    Test: blob_esnap_clone_inflate ...passed
00:03:45.512    Test: blob_esnap_clone_decouple ...passed
00:03:45.512    Test: blob_esnap_clone_reload ...passed
00:03:45.512    Test: blob_esnap_hotplug ...passed
00:03:45.512    Test: blob_set_parent ...[2024-12-09 10:18:37.659070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:03:45.512  [2024-12-09 10:18:37.659112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:03:45.513  [2024-12-09 10:18:37.659128] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:03:45.513  [2024-12-09 10:18:37.659136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:03:45.513  [2024-12-09 10:18:37.659181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:45.513  passed
00:03:45.771    Test: blob_set_external_parent ...[2024-12-09 10:18:37.675259] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:03:45.771  [2024-12-09 10:18:37.675296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7840:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:03:45.771  [2024-12-09 10:18:37.675304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:03:45.771  [2024-12-09 10:18:37.675376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:45.771  passed
00:03:45.771  Suite: blob_nocopy_extent_16k_phys
00:03:45.771    Test: blob_init ...[2024-12-09 10:18:37.680645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5528:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:03:45.771  passed
00:03:45.771    Test: blob_thin_provision ...passed
00:03:45.771    Test: blob_read_only ...passed
00:03:45.771    Test: bs_load ...[2024-12-09 10:18:37.701506] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 975:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:03:45.771  passed
00:03:45.771    Test: bs_load_custom_cluster_size ...passed
00:03:45.771    Test: bs_load_after_failed_grow ...passed
00:03:45.771    Test: bs_load_error ...passed
00:03:45.771    Test: bs_cluster_sz ...[2024-12-09 10:18:37.716996] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:03:45.772  [2024-12-09 10:18:37.717035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5663:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:03:45.772  [2024-12-09 10:18:37.717044] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3840:bs_opts_verify: *ERROR*: Cluster size 16383 is not an integral multiple of blocklen 4096
00:03:45.772  passed
00:03:45.772    Test: bs_resize_md ...passed
00:03:45.772    Test: bs_destroy ...passed
00:03:45.772    Test: bs_type ...passed
00:03:45.772    Test: bs_super_block ...passed
00:03:45.772    Test: bs_test_recover_cluster_count ...passed
00:03:45.772    Test: bs_grow_live ...passed
00:03:45.772    Test: bs_grow_live_no_space ...passed
00:03:45.772    Test: bs_test_grow ...passed
00:03:45.772    Test: blob_serialize_test ...passed
00:03:45.772    Test: super_block_crc ...passed
00:03:45.772    Test: blob_thin_prov_write_count_io ...passed
00:03:45.772    Test: blob_thin_prov_unmap_cluster ...passed
00:03:45.772    Test: bs_load_iter_test ...passed
00:03:45.772    Test: blob_relations ...[2024-12-09 10:18:37.801581] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:45.772  [2024-12-09 10:18:37.801628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.801728] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:45.772  [2024-12-09 10:18:37.801735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  passed
00:03:45.772    Test: blob_relations2 ...[2024-12-09 10:18:37.808311] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:45.772  [2024-12-09 10:18:37.808343] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.808351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:45.772  [2024-12-09 10:18:37.808356] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.808488] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:45.772  [2024-12-09 10:18:37.808495] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.808538] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:45.772  [2024-12-09 10:18:37.808544] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  passed
00:03:45.772    Test: blob_relations3 ...passed
00:03:45.772    Test: blobstore_clean_power_failure ...passed
00:03:45.772    Test: blob_delete_snapshot_power_failure ...[2024-12-09 10:18:37.877417] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:45.772  [2024-12-09 10:18:37.882783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:45.772  [2024-12-09 10:18:37.888104] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:45.772  [2024-12-09 10:18:37.888137] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:45.772  [2024-12-09 10:18:37.888143] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.893689] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:45.772  [2024-12-09 10:18:37.893712] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:45.772  [2024-12-09 10:18:37.893718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:45.772  [2024-12-09 10:18:37.893724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.899451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:45.772  [2024-12-09 10:18:37.899474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:45.772  [2024-12-09 10:18:37.899480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:45.772  [2024-12-09 10:18:37.899486] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.905352] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:03:45.772  [2024-12-09 10:18:37.905409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.911645] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:03:45.772  [2024-12-09 10:18:37.911676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  [2024-12-09 10:18:37.917579] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:03:45.772  [2024-12-09 10:18:37.917608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:45.772  passed
00:03:46.031    Test: blob_create_snapshot_power_failure ...[2024-12-09 10:18:37.935621] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:46.031  [2024-12-09 10:18:37.941156] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:46.031  [2024-12-09 10:18:37.951518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:46.031  [2024-12-09 10:18:37.956957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:03:46.031  passed
00:03:46.031    Test: blob_io_unit ...passed
00:03:46.031    Test: blob_io_unit_compatibility ...passed
00:03:46.031    Test: blob_ext_md_pages ...passed
00:03:46.031    Test: blob_esnap_io_4096_4096 ...passed
00:03:46.031    Test: blob_esnap_io_512_512 ...passed
00:03:46.031    Test: blob_esnap_io_4096_512 ...passed
00:03:46.031    Test: blob_esnap_io_512_4096 ...passed
00:03:46.031    Test: blob_esnap_clone_resize ...passed
00:03:46.031  Suite: blob_bs_nocopy_extent_16k_phys
00:03:46.031    Test: blob_open ...passed
00:03:46.031    Test: blob_create ...[2024-12-09 10:18:38.073865] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:03:46.031  passed
00:03:46.031    Test: blob_create_loop ...passed
00:03:46.031    Test: blob_create_fail ...[2024-12-09 10:18:38.112782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:46.031  passed
00:03:46.031    Test: blob_create_internal ...passed
00:03:46.031    Test: blob_create_zero_extent ...passed
00:03:46.031    Test: blob_snapshot ...passed
00:03:46.031    Test: blob_clone ...passed
00:03:46.289    Test: blob_inflate ...[2024-12-09 10:18:38.192541] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:03:46.289  passed
00:03:46.289    Test: blob_delete ...passed
00:03:46.289    Test: blob_resize_test ...[2024-12-09 10:18:38.223826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:03:46.289  passed
00:03:46.289    Test: blob_resize_thin_test ...passed
00:03:46.289    Test: channel_ops ...passed
00:03:46.289    Test: blob_super ...passed
00:03:46.289    Test: blob_rw_verify_iov ...passed
00:03:46.289    Test: blob_unmap ...passed
00:03:46.289    Test: blob_iter ...passed
00:03:46.289    Test: blob_parse_md ...passed
00:03:46.289    Test: bs_load_pending_removal ...passed
00:03:46.289    Test: bs_unload ...[2024-12-09 10:18:38.358163] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:03:46.289  passed
00:03:46.289    Test: bs_usable_clusters ...passed
00:03:46.289    Test: blob_crc ...[2024-12-09 10:18:38.386727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:46.289  [2024-12-09 10:18:38.386782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:46.289  passed
00:03:46.289    Test: blob_flags ...passed
00:03:46.289    Test: bs_version ...passed
00:03:46.289    Test: blob_set_xattrs_test ...[2024-12-09 10:18:38.429035] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:46.290  [2024-12-09 10:18:38.429075] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:46.290  passed
00:03:46.548    Test: blob_thin_prov_alloc ...passed
00:03:46.548    Test: blob_insert_cluster_msg_test ...passed
00:03:46.548    Test: blob_thin_prov_rw ...passed
00:03:46.548    Test: blob_thin_prov_rle ...passed
00:03:46.548    Test: blob_thin_prov_rw_iov ...passed
00:03:46.548    Test: blob_snapshot_rw ...passed
00:03:46.548    Test: blob_snapshot_rw_iov ...passed
00:03:46.548    Test: blob_inflate_rw ...passed
00:03:46.548    Test: blob_snapshot_freeze_io ...passed
00:03:46.548    Test: blob_operation_split_rw ...passed
00:03:46.548    Test: blob_operation_split_rw_iov ...passed
00:03:46.806    Test: blob_simultaneous_operations ...[2024-12-09 10:18:38.709783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:46.806  [2024-12-09 10:18:38.709826] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:46.806  [2024-12-09 10:18:38.709984] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:46.806  [2024-12-09 10:18:38.709995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:46.806  [2024-12-09 10:18:38.712287] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:46.806  [2024-12-09 10:18:38.712307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:46.806  [2024-12-09 10:18:38.712330] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:46.806  [2024-12-09 10:18:38.712336] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:46.806  passed
00:03:46.806    Test: blob_persist_test ...passed
00:03:46.806    Test: blob_decouple_snapshot ...passed
00:03:46.806    Test: blob_seek_io_unit ...passed
00:03:46.806    Test: blob_nested_freezes ...passed
00:03:46.806    Test: blob_clone_resize ...passed
00:03:46.806    Test: blob_shallow_copy ...[2024-12-09 10:18:38.815020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:03:46.806  [2024-12-09 10:18:38.815073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7386:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:03:46.806  [2024-12-09 10:18:38.815081] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7394:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:03:46.806  passed
00:03:46.806  Suite: blob_blob_nocopy_extent_16k_phys
00:03:46.806    Test: blob_write ...passed
00:03:46.806    Test: blob_read ...passed
00:03:46.806    Test: blob_rw_verify ...passed
00:03:46.806    Test: blob_rw_verify_iov_nomem ...passed
00:03:46.806    Test: blob_rw_iov_read_only ...passed
00:03:46.806    Test: blob_xattr ...passed
00:03:46.806    Test: blob_dirty_shutdown ...passed
00:03:46.806    Test: blob_is_degraded ...passed
00:03:46.806  Suite: blob_esnap_bs_nocopy_extent_16k_phys
00:03:46.806    Test: blob_esnap_create ...passed
00:03:47.065    Test: blob_esnap_thread_add_remove ...passed
00:03:47.065    Test: blob_esnap_clone_snapshot ...passed
00:03:47.065    Test: blob_esnap_clone_inflate ...passed
00:03:47.065    Test: blob_esnap_clone_decouple ...passed
00:03:47.065    Test: blob_esnap_clone_reload ...passed
00:03:47.065    Test: blob_esnap_hotplug ...passed
00:03:47.065    Test: blob_set_parent ...[2024-12-09 10:18:39.050185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:03:47.065  [2024-12-09 10:18:39.050233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:03:47.065  [2024-12-09 10:18:39.050257] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:03:47.065  [2024-12-09 10:18:39.050264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:03:47.065  [2024-12-09 10:18:39.050329] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:47.065  passed
00:03:47.065    Test: blob_set_external_parent ...[2024-12-09 10:18:39.067161] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:03:47.065  [2024-12-09 10:18:39.067195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7840:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 258048 is not an integer multiple of cluster size 65536
00:03:47.065  [2024-12-09 10:18:39.067202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:03:47.065  [2024-12-09 10:18:39.067267] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:47.065  passed
00:03:47.065  Suite: blob_copy_noextent
00:03:47.065    Test: blob_init ...[2024-12-09 10:18:39.073053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5528:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:03:47.065  passed
00:03:47.065    Test: blob_thin_provision ...passed
00:03:47.065    Test: blob_read_only ...passed
00:03:47.065    Test: bs_load ...[2024-12-09 10:18:39.094301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 975:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:03:47.065  passed
00:03:47.065    Test: bs_load_custom_cluster_size ...passed
00:03:47.065    Test: bs_load_after_failed_grow ...passed
00:03:47.065    Test: bs_load_error ...passed
00:03:47.065    Test: bs_cluster_sz ...[2024-12-09 10:18:39.109189] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:03:47.065  [2024-12-09 10:18:39.109224] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5663:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:03:47.065  [2024-12-09 10:18:39.109232] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3840:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:03:47.065  passed
00:03:47.065    Test: bs_resize_md ...passed
00:03:47.065    Test: bs_destroy ...passed
00:03:47.065    Test: bs_type ...passed
00:03:47.066    Test: bs_super_block ...passed
00:03:47.066    Test: bs_test_recover_cluster_count ...passed
00:03:47.066    Test: bs_grow_live ...passed
00:03:47.066    Test: bs_grow_live_no_space ...passed
00:03:47.066    Test: bs_test_grow ...passed
00:03:47.066    Test: blob_serialize_test ...passed
00:03:47.066    Test: super_block_crc ...passed
00:03:47.066    Test: blob_thin_prov_write_count_io ...passed
00:03:47.066    Test: blob_thin_prov_unmap_cluster ...passed
00:03:47.066    Test: bs_load_iter_test ...passed
00:03:47.066    Test: blob_relations ...[2024-12-09 10:18:39.186027] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:47.066  [2024-12-09 10:18:39.186070] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.066  [2024-12-09 10:18:39.186140] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:47.066  [2024-12-09 10:18:39.186146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.066  passed
00:03:47.066    Test: blob_relations2 ...[2024-12-09 10:18:39.194411] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:47.066  [2024-12-09 10:18:39.194504] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.066  [2024-12-09 10:18:39.194531] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:47.066  [2024-12-09 10:18:39.194549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.066  [2024-12-09 10:18:39.195019] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:47.066  [2024-12-09 10:18:39.195062] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.066  [2024-12-09 10:18:39.195215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:47.066  [2024-12-09 10:18:39.195248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.066  passed
00:03:47.066    Test: blob_relations3 ...passed
00:03:47.324    Test: blobstore_clean_power_failure ...passed
00:03:47.324    Test: blob_delete_snapshot_power_failure ...[2024-12-09 10:18:39.271821] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:03:47.324  [2024-12-09 10:18:39.277412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:47.324  [2024-12-09 10:18:39.277454] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:47.324  [2024-12-09 10:18:39.277461] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.324  [2024-12-09 10:18:39.282775] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:03:47.324  [2024-12-09 10:18:39.282802] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:47.324  [2024-12-09 10:18:39.282809] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:47.324  [2024-12-09 10:18:39.282815] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.324  [2024-12-09 10:18:39.288412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:03:47.324  [2024-12-09 10:18:39.288468] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.324  [2024-12-09 10:18:39.294132] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:03:47.324  [2024-12-09 10:18:39.294166] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.324  [2024-12-09 10:18:39.300002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:03:47.324  [2024-12-09 10:18:39.300033] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:47.324  passed
00:03:47.324    Test: blob_create_snapshot_power_failure ...[2024-12-09 10:18:39.316503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:47.324  [2024-12-09 10:18:39.327085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5
00:03:47.324  [2024-12-09 10:18:39.332610] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:03:47.324  passed
00:03:47.324    Test: blob_io_unit ...passed
00:03:47.324    Test: blob_io_unit_compatibility ...passed
00:03:47.324    Test: blob_ext_md_pages ...passed
00:03:47.324    Test: blob_esnap_io_4096_4096 ...passed
00:03:47.324    Test: blob_esnap_io_512_512 ...passed
00:03:47.324    Test: blob_esnap_io_4096_512 ...passed
00:03:47.324    Test: blob_esnap_io_512_4096 ...passed
00:03:47.324    Test: blob_esnap_clone_resize ...passed
00:03:47.324  Suite: blob_bs_copy_noextent
00:03:47.324    Test: blob_open ...passed
00:03:47.324    Test: blob_create ...[2024-12-09 10:18:39.436594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:03:47.324  passed
00:03:47.324    Test: blob_create_loop ...passed
00:03:47.324    Test: blob_create_fail ...[2024-12-09 10:18:39.472136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:47.324  passed
00:03:47.582    Test: blob_create_internal ...passed
00:03:47.582    Test: blob_create_zero_extent ...passed
00:03:47.582    Test: blob_snapshot ...passed
00:03:47.582    Test: blob_clone ...passed
00:03:47.582    Test: blob_inflate ...[2024-12-09 10:18:39.544281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:03:47.582  passed
00:03:47.582    Test: blob_delete ...passed
00:03:47.582    Test: blob_resize_test ...[2024-12-09 10:18:39.573249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:03:47.582  passed
00:03:47.582    Test: blob_resize_thin_test ...passed
00:03:47.582    Test: channel_ops ...passed
00:03:47.582    Test: blob_super ...passed
00:03:47.582    Test: blob_rw_verify_iov ...passed
00:03:47.582    Test: blob_unmap ...passed
00:03:47.583    Test: blob_iter ...passed
00:03:47.583    Test: blob_parse_md ...passed
00:03:47.583    Test: bs_load_pending_removal ...passed
00:03:47.583    Test: bs_unload ...[2024-12-09 10:18:39.703651] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:03:47.583  passed
00:03:47.583    Test: bs_usable_clusters ...passed
00:03:47.583    Test: blob_crc ...[2024-12-09 10:18:39.731933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:47.583  [2024-12-09 10:18:39.731973] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:47.583  passed
00:03:47.840    Test: blob_flags ...passed
00:03:47.840    Test: bs_version ...passed
00:03:47.840    Test: blob_set_xattrs_test ...[2024-12-09 10:18:39.774306] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:47.840  [2024-12-09 10:18:39.774347] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:47.840  passed
00:03:47.840    Test: blob_thin_prov_alloc ...passed
00:03:47.840    Test: blob_insert_cluster_msg_test ...passed
00:03:47.840    Test: blob_thin_prov_rw ...passed
00:03:47.840    Test: blob_thin_prov_rle ...passed
00:03:47.840    Test: blob_thin_prov_rw_iov ...passed
00:03:47.840    Test: blob_snapshot_rw ...passed
00:03:47.840    Test: blob_snapshot_rw_iov ...passed
00:03:47.840    Test: blob_inflate_rw ...passed
00:03:47.840    Test: blob_snapshot_freeze_io ...passed
00:03:48.101    Test: blob_operation_split_rw ...passed
00:03:48.101    Test: blob_operation_split_rw_iov ...passed
00:03:48.101    Test: blob_simultaneous_operations ...[2024-12-09 10:18:40.053228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:48.101  [2024-12-09 10:18:40.053274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.101  [2024-12-09 10:18:40.053434] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:48.101  [2024-12-09 10:18:40.053446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.101  [2024-12-09 10:18:40.055086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:48.101  [2024-12-09 10:18:40.055113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.101  [2024-12-09 10:18:40.055130] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:48.101  [2024-12-09 10:18:40.055136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.101  passed
00:03:48.101    Test: blob_persist_test ...passed
00:03:48.101    Test: blob_decouple_snapshot ...passed
00:03:48.101    Test: blob_seek_io_unit ...passed
00:03:48.101    Test: blob_nested_freezes ...passed
00:03:48.101    Test: blob_clone_resize ...passed
00:03:48.101    Test: blob_shallow_copy ...[2024-12-09 10:18:40.155052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:03:48.101  [2024-12-09 10:18:40.155099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7386:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:03:48.101  [2024-12-09 10:18:40.155108] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7394:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:03:48.101  passed
00:03:48.101  Suite: blob_blob_copy_noextent
00:03:48.101    Test: blob_write ...passed
00:03:48.101    Test: blob_read ...passed
00:03:48.101    Test: blob_rw_verify ...passed
00:03:48.101    Test: blob_rw_verify_iov_nomem ...passed
00:03:48.101    Test: blob_rw_iov_read_only ...passed
00:03:48.101    Test: blob_xattr ...passed
00:03:48.101    Test: blob_dirty_shutdown ...passed
00:03:48.359    Test: blob_is_degraded ...passed
00:03:48.359  Suite: blob_esnap_bs_copy_noextent
00:03:48.359    Test: blob_esnap_create ...passed
00:03:48.359    Test: blob_esnap_thread_add_remove ...passed
00:03:48.359    Test: blob_esnap_clone_snapshot ...passed
00:03:48.359    Test: blob_esnap_clone_inflate ...passed
00:03:48.359    Test: blob_esnap_clone_decouple ...passed
00:03:48.359    Test: blob_esnap_clone_reload ...passed
00:03:48.359    Test: blob_esnap_hotplug ...passed
00:03:48.359    Test: blob_set_parent ...[2024-12-09 10:18:40.360338] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:03:48.359  [2024-12-09 10:18:40.360377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:03:48.359  [2024-12-09 10:18:40.360394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:03:48.359  [2024-12-09 10:18:40.360403] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:03:48.359  [2024-12-09 10:18:40.360446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:48.359  passed
00:03:48.359    Test: blob_set_external_parent ...[2024-12-09 10:18:40.374759] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:03:48.359  [2024-12-09 10:18:40.374803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7840:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:03:48.359  [2024-12-09 10:18:40.374811] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:03:48.359  [2024-12-09 10:18:40.374853] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:48.359  passed
00:03:48.359  Suite: blob_copy_extent
00:03:48.359    Test: blob_init ...[2024-12-09 10:18:40.380726] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5528:spdk_bs_init: *ERROR*: unsupported dev block length of 500
00:03:48.359  passed
00:03:48.359    Test: blob_thin_provision ...passed
00:03:48.359    Test: blob_read_only ...passed
00:03:48.360    Test: bs_load ...[2024-12-09 10:18:40.399977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 975:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000)
00:03:48.360  passed
00:03:48.360    Test: bs_load_custom_cluster_size ...passed
00:03:48.360    Test: bs_load_after_failed_grow ...passed
00:03:48.360    Test: bs_load_error ...passed
00:03:48.360    Test: bs_cluster_sz ...[2024-12-09 10:18:40.413126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3834:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0
00:03:48.360  [2024-12-09 10:18:40.413164] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5663:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size.
00:03:48.360  [2024-12-09 10:18:40.413173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3840:bs_opts_verify: *ERROR*: Cluster size 4095 is not an integral multiple of blocklen 4096
00:03:48.360  passed
00:03:48.360    Test: bs_resize_md ...passed
00:03:48.360    Test: bs_destroy ...passed
00:03:48.360    Test: bs_type ...passed
00:03:48.360    Test: bs_super_block ...passed
00:03:48.360    Test: bs_test_recover_cluster_count ...passed
00:03:48.360    Test: bs_grow_live ...passed
00:03:48.360    Test: bs_grow_live_no_space ...passed
00:03:48.360    Test: bs_test_grow ...passed
00:03:48.360    Test: blob_serialize_test ...passed
00:03:48.360    Test: super_block_crc ...passed
00:03:48.360    Test: blob_thin_prov_write_count_io ...passed
00:03:48.360    Test: blob_thin_prov_unmap_cluster ...passed
00:03:48.360    Test: bs_load_iter_test ...passed
00:03:48.360    Test: blob_relations ...[2024-12-09 10:18:40.479350] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:48.360  [2024-12-09 10:18:40.479396] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.360  [2024-12-09 10:18:40.479494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:48.360  [2024-12-09 10:18:40.479501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.360  passed
00:03:48.360    Test: blob_relations2 ...[2024-12-09 10:18:40.485376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:48.360  [2024-12-09 10:18:40.485407] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.360  [2024-12-09 10:18:40.485414] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:48.360  [2024-12-09 10:18:40.485419] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.360  [2024-12-09 10:18:40.485555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:48.360  [2024-12-09 10:18:40.485563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.360  [2024-12-09 10:18:40.485593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8430:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone
00:03:48.360  [2024-12-09 10:18:40.485599] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.360  passed
00:03:48.360    Test: blob_relations3 ...passed
00:03:48.618    Test: blobstore_clean_power_failure ...passed
00:03:48.618    Test: blob_delete_snapshot_power_failure ...[2024-12-09 10:18:40.547050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:48.618  [2024-12-09 10:18:40.551606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:48.618  [2024-12-09 10:18:40.556084] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:48.618  [2024-12-09 10:18:40.556112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:48.618  [2024-12-09 10:18:40.556118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.618  [2024-12-09 10:18:40.560649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:48.618  [2024-12-09 10:18:40.560673] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:48.618  [2024-12-09 10:18:40.560680] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:48.618  [2024-12-09 10:18:40.560686] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.618  [2024-12-09 10:18:40.565369] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:48.618  [2024-12-09 10:18:40.565389] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1475:blob_load_snapshot_cpl: *ERROR*: Snapshot fail
00:03:48.618  [2024-12-09 10:18:40.565395] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8344:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone
00:03:48.618  [2024-12-09 10:18:40.565401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.618  [2024-12-09 10:18:40.569995] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8271:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob
00:03:48.618  [2024-12-09 10:18:40.570021] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.618  [2024-12-09 10:18:40.574797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8140:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone
00:03:48.618  [2024-12-09 10:18:40.574825] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.618  [2024-12-09 10:18:40.579947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8084:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob
00:03:48.618  [2024-12-09 10:18:40.579972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:48.618  passed
00:03:48.618    Test: blob_create_snapshot_power_failure ...[2024-12-09 10:18:40.594100] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5
00:03:48.618  [2024-12-09 10:18:40.599210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1588:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5
00:03:48.618  [2024-12-09 10:18:40.607993] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5
00:03:48.618  [2024-12-09 10:18:40.612401] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6489:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5
00:03:48.618  passed
00:03:48.618    Test: blob_io_unit ...passed
00:03:48.618    Test: blob_io_unit_compatibility ...passed
00:03:48.618    Test: blob_ext_md_pages ...passed
00:03:48.618    Test: blob_esnap_io_4096_4096 ...passed
00:03:48.618    Test: blob_esnap_io_512_512 ...passed
00:03:48.618    Test: blob_esnap_io_4096_512 ...passed
00:03:48.618    Test: blob_esnap_io_512_4096 ...passed
00:03:48.618    Test: blob_esnap_clone_resize ...passed
00:03:48.618  Suite: blob_bs_copy_extent
00:03:48.618    Test: blob_open ...passed
00:03:48.618    Test: blob_create ...[2024-12-09 10:18:40.700540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters)
00:03:48.618  passed
00:03:48.618    Test: blob_create_loop ...passed
00:03:48.618    Test: blob_create_fail ...[2024-12-09 10:18:40.732051] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:48.618  passed
00:03:48.618    Test: blob_create_internal ...passed
00:03:48.618    Test: blob_create_zero_extent ...passed
00:03:48.618    Test: blob_snapshot ...passed
00:03:48.877    Test: blob_clone ...passed
00:03:48.877    Test: blob_inflate ...[2024-12-09 10:18:40.792652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7152:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent.
00:03:48.877  passed
00:03:48.877    Test: blob_delete ...passed
00:03:48.877    Test: blob_resize_test ...[2024-12-09 10:18:40.816119] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7889:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28
00:03:48.877  passed
00:03:48.877    Test: blob_resize_thin_test ...passed
00:03:48.877    Test: channel_ops ...passed
00:03:48.877    Test: blob_super ...passed
00:03:48.877    Test: blob_rw_verify_iov ...passed
00:03:48.877    Test: blob_unmap ...passed
00:03:48.877    Test: blob_iter ...passed
00:03:48.877    Test: blob_parse_md ...passed
00:03:48.877    Test: bs_load_pending_removal ...passed
00:03:48.877    Test: bs_unload ...[2024-12-09 10:18:40.924933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5929:spdk_bs_unload: *ERROR*: Blobstore still has open blobs
00:03:48.877  passed
00:03:48.877    Test: bs_usable_clusters ...passed
00:03:48.877    Test: blob_crc ...[2024-12-09 10:18:40.948198] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:48.877  [2024-12-09 10:18:40.948233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1688:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000
00:03:48.877  passed
00:03:48.877    Test: blob_flags ...passed
00:03:48.877    Test: bs_version ...passed
00:03:48.877    Test: blob_set_xattrs_test ...[2024-12-09 10:18:40.984606] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:48.877  [2024-12-09 10:18:40.984641] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6371:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters)
00:03:48.877  passed
00:03:48.877    Test: blob_thin_prov_alloc ...passed
00:03:48.877    Test: blob_insert_cluster_msg_test ...passed
00:03:48.877    Test: blob_thin_prov_rw ...passed
00:03:49.137    Test: blob_thin_prov_rle ...passed
00:03:49.137    Test: blob_thin_prov_rw_iov ...passed
00:03:49.137    Test: blob_snapshot_rw ...passed
00:03:49.137    Test: blob_snapshot_rw_iov ...passed
00:03:49.137    Test: blob_inflate_rw ...passed
00:03:49.137    Test: blob_snapshot_freeze_io ...passed
00:03:49.137    Test: blob_operation_split_rw ...passed
00:03:49.137    Test: blob_operation_split_rw_iov ...passed
00:03:49.137    Test: blob_simultaneous_operations ...[2024-12-09 10:18:41.240211] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:49.137  [2024-12-09 10:18:41.240251] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:49.137  [2024-12-09 10:18:41.240394] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:49.137  [2024-12-09 10:18:41.240404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:49.137  [2024-12-09 10:18:41.241593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:49.137  [2024-12-09 10:18:41.241608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:49.137  [2024-12-09 10:18:41.241623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8457:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open
00:03:49.137  [2024-12-09 10:18:41.241629] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8397:bs_delete_blob_finish: *ERROR*: Failed to remove blob
00:03:49.137  passed
00:03:49.137    Test: blob_persist_test ...passed
00:03:49.137    Test: blob_decouple_snapshot ...passed
00:03:49.396    Test: blob_seek_io_unit ...passed
00:03:49.396    Test: blob_nested_freezes ...passed
00:03:49.396    Test: blob_clone_resize ...passed
00:03:49.396    Test: blob_shallow_copy ...[2024-12-09 10:18:41.336144] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7375:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only
00:03:49.396  [2024-12-09 10:18:41.336195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7386:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size
00:03:49.396  [2024-12-09 10:18:41.336202] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7394:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size
00:03:49.396  passed
00:03:49.396  Suite: blob_blob_copy_extent
00:03:49.396    Test: blob_write ...passed
00:03:49.396    Test: blob_read ...passed
00:03:49.396    Test: blob_rw_verify ...passed
00:03:49.396    Test: blob_rw_verify_iov_nomem ...passed
00:03:49.396    Test: blob_rw_iov_read_only ...passed
00:03:49.396    Test: blob_xattr ...passed
00:03:49.396    Test: blob_dirty_shutdown ...passed
00:03:49.396    Test: blob_is_degraded ...passed
00:03:49.396  Suite: blob_esnap_bs_copy_extent
00:03:49.396    Test: blob_esnap_create ...passed
00:03:49.396    Test: blob_esnap_thread_add_remove ...passed
00:03:49.396    Test: blob_esnap_clone_snapshot ...passed
00:03:49.396    Test: blob_esnap_clone_inflate ...passed
00:03:49.396    Test: blob_esnap_clone_decouple ...passed
00:03:49.396    Test: blob_esnap_clone_reload ...passed
00:03:49.396    Test: blob_esnap_hotplug ...passed
00:03:49.396    Test: blob_set_parent ...[2024-12-09 10:18:41.527188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7656:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid
00:03:49.396  [2024-12-09 10:18:41.527222] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7662:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same
00:03:49.396  [2024-12-09 10:18:41.527236] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7591:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot
00:03:49.396  [2024-12-09 10:18:41.527243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7598:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones
00:03:49.396  [2024-12-09 10:18:41.527279] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7637:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:49.396  passed
00:03:49.397    Test: blob_set_external_parent ...[2024-12-09 10:18:41.539348] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7831:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same
00:03:49.397  [2024-12-09 10:18:41.539376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7840:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384
00:03:49.397  [2024-12-09 10:18:41.539381] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7792:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob
00:03:49.397  [2024-12-09 10:18:41.539443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7798:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned
00:03:49.397  passed
00:03:49.397  
00:03:49.397  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.397                suites     20     20    n/a      0        0
00:03:49.397                 tests    475    475    475      0        0
00:03:49.397               asserts 205372 205372 205372      0      n/a
00:03:49.397  
00:03:49.397  Elapsed time =    6.609 seconds
00:03:49.397   10:18:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut
00:03:49.397  
00:03:49.397  
00:03:49.397       CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.397       http://cunit.sourceforge.net/
00:03:49.397  
00:03:49.397  
00:03:49.397  Suite: blob_bdev
00:03:49.397    Test: create_bs_dev ...passed
00:03:49.397    Test: create_bs_dev_ro ...[2024-12-09 10:18:41.552290] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 540:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options
00:03:49.397  passed
00:03:49.397    Test: create_bs_dev_rw ...passed
00:03:49.397    Test: claim_bs_dev ...passed
00:03:49.397    Test: claim_bs_dev_ro ...passed
00:03:49.397    Test: deferred_destroy_refs ...passed
00:03:49.397    Test: deferred_destroy_channels ...passed
00:03:49.397    Test: deferred_destroy_threads ...passed
00:03:49.397  
00:03:49.397  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.397                suites      1      1    n/a      0        0
00:03:49.397                 tests      8      8      8      0        0
00:03:49.397               asserts    119    119    119      0      n/a
00:03:49.397  
00:03:49.397  Elapsed time =    0.000 seconds
00:03:49.397  [2024-12-09 10:18:41.552454] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 350:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev
00:03:49.397   10:18:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut
00:03:49.659  
00:03:49.659  
00:03:49.659       CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.659       http://cunit.sourceforge.net/
00:03:49.659  
00:03:49.659  
00:03:49.659  Suite: tree
00:03:49.659    Test: blobfs_tree_op_test ...passed
00:03:49.659  
00:03:49.659  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.659                suites      1      1    n/a      0        0
00:03:49.659                 tests      1      1      1      0        0
00:03:49.659               asserts     27     27     27      0      n/a
00:03:49.659  
00:03:49.659  Elapsed time =    0.000 seconds
00:03:49.659   10:18:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut
00:03:49.659  
00:03:49.659  
00:03:49.659       CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.659       http://cunit.sourceforge.net/
00:03:49.659  
00:03:49.659  
00:03:49.659  Suite: blobfs_async_ut
00:03:49.659    Test: fs_init ...passed
00:03:49.659    Test: fs_open ...passed
00:03:49.659    Test: fs_create ...passed
00:03:49.659    Test: fs_truncate ...passed
00:03:49.659    Test: fs_rename ...passed
00:03:49.659    Test: fs_rw_async ...[2024-12-09 10:18:41.608332] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted
00:03:49.659  passed
00:03:49.659    Test: fs_writev_readv_async ...passed
00:03:49.659    Test: tree_find_buffer_ut ...passed
00:03:49.659    Test: channel_ops ...passed
00:03:49.659    Test: channel_ops_sync ...passed
00:03:49.659  
00:03:49.659  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.659                suites      1      1    n/a      0        0
00:03:49.659                 tests     10     10     10      0        0
00:03:49.659               asserts    292    292    292      0      n/a
00:03:49.659  
00:03:49.659  Elapsed time =    0.055 seconds
00:03:49.659   10:18:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut
00:03:49.659  
00:03:49.659  
00:03:49.659       CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.659       http://cunit.sourceforge.net/
00:03:49.659  
00:03:49.659  
00:03:49.659  Suite: blobfs_sync_ut
00:03:49.659    Test: cache_read_after_write ...[2024-12-09 10:18:41.672037] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1480:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted
00:03:49.659  passed
00:03:49.659    Test: file_length ...passed
00:03:49.659    Test: append_write_to_extend_blob ...passed
00:03:49.659    Test: partial_buffer ...passed
00:03:49.659    Test: cache_write_null_buffer ...passed
00:03:49.659    Test: fs_create_sync ...passed
00:03:49.659    Test: fs_rename_sync ...passed
00:03:49.659    Test: cache_append_no_cache ...passed
00:03:49.659    Test: fs_delete_file_without_close ...passed
00:03:49.659  
00:03:49.659  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.659                suites      1      1    n/a      0        0
00:03:49.659                 tests      9      9      9      0        0
00:03:49.659               asserts    345    345    345      0      n/a
00:03:49.659  
00:03:49.659  Elapsed time =    0.141 seconds
00:03:49.659   10:18:41 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut
00:03:49.659  
00:03:49.659  
00:03:49.659       CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.659       http://cunit.sourceforge.net/
00:03:49.659  
00:03:49.659  
00:03:49.659  Suite: blobfs_bdev_ut
00:03:49.659    Test: spdk_blobfs_bdev_detect_test ...[2024-12-09 10:18:41.720229] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:03:49.659  passed
00:03:49.659    Test: spdk_blobfs_bdev_create_test ...passed
00:03:49.659    Test: spdk_blobfs_bdev_mount_test ...passed
00:03:49.659  
00:03:49.659  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.659                suites      1      1    n/a      0        0
00:03:49.659                 tests      3      3      3      0        0
00:03:49.659               asserts      9      9      9      0      n/a
00:03:49.659  
00:03:49.659  Elapsed time =    0.000 seconds
00:03:49.659  [2024-12-09 10:18:41.720360] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c:  59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1
00:03:49.659  ************************************
00:03:49.659  END TEST unittest_blob_blobfs
00:03:49.659  ************************************
00:03:49.659  
00:03:49.659  real	0m6.799s
00:03:49.659  user	0m6.743s
00:03:49.659  sys	0m0.119s
00:03:49.659   10:18:41 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:49.659   10:18:41 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x
00:03:49.659   10:18:41 unittest -- unit/unittest.sh@216 -- # run_test unittest_event unittest_event
00:03:49.659   10:18:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:49.659   10:18:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:49.659   10:18:41 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:49.659  ************************************
00:03:49.659  START TEST unittest_event
00:03:49.659  ************************************
00:03:49.659   10:18:41 unittest.unittest_event -- common/autotest_common.sh@1129 -- # unittest_event
00:03:49.659   10:18:41 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut
00:03:49.659  
00:03:49.659  
00:03:49.659       CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.659       http://cunit.sourceforge.net/
00:03:49.659  
00:03:49.659  
00:03:49.659  Suite: app_suite
00:03:49.659    Test: test_spdk_app_parse_args ...app_ut [options]
00:03:49.659  
00:03:49.659  CPU options:
00:03:49.659   -m, --cpumask <mask or list>    core mask (like 0xF) or core list of '[]' embraced for DPDK
00:03:49.659                                   (like [0,1,10])
00:03:49.659       --lcores <list>       lcore to CPU mapping list. The list is in the format:
00:03:49.659                             <lcores[@CPUs]>[<,lcores[@CPUs]>...]
00:03:49.659                             lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"'
00:03:49.659                             Within the group, '-' is used for range separator,
00:03:49.659  app_ut: invalid option -- z
00:03:49.659                             ',' is used for single number separator.
00:03:49.659                             '( )' can be omitted for single element group,
00:03:49.659                             '@' can be omitted if cpus and lcores have the same value
00:03:49.659       --disable-cpumask-locks    Disable CPU core lock files.
00:03:49.659       --interrupt-mode      set app to interrupt mode (Warning: CPU usage will be reduced only if all
00:03:49.659                             pollers in the app support interrupt mode)
00:03:49.659   -p, --main-core <id>      main (primary) core for DPDK
00:03:49.659  
00:03:49.659  Configuration options:
00:03:49.659   -c, --config, --json  <config>     JSON config file
00:03:49.659   -r, --rpc-socket <path>   RPC listen address (default /var/tmp/spdk.sock)
00:03:49.659       --no-rpc-server       skip RPC server initialization. This option ignores '--rpc-socket' value.
00:03:49.659       --wait-for-rpc        wait for RPCs to initialize subsystems
00:03:49.659       --rpcs-allowed	   comma-separated list of permitted RPCS
00:03:49.659       --json-ignore-init-errors    don't exit on invalid config entry
00:03:49.659  
00:03:49.659  Memory options:
00:03:49.659       --iova-mode <pa/va>   set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA)
00:03:49.659       --base-virtaddr <addr>      the base virtual address for DPDK (default: 0x200000000000)
00:03:49.659       --huge-dir <path>     use a specific hugetlbfs mount to reserve memory from
00:03:49.659   -R, --huge-unlink         unlink huge files after initialization
00:03:49.659   -n, --mem-channels <num>  number of memory channels used for DPDK
00:03:49.659   -s, --mem-size <size>     memory size in MB for DPDK (default: all hugepage memory)
00:03:49.659       --msg-mempool-size <size>  global message memory pool size in count (default: 262143)
00:03:49.659       --no-huge             run without using hugepages
00:03:49.659       --enforce-numa        enforce NUMA allocations from the specified NUMA node
00:03:49.659   -i, --shm-id <id>         shared memory ID (optional)
00:03:49.659   -g, --single-file-segments   force creating just one hugetlbfs file
00:03:49.659  
00:03:49.659  PCI options:
00:03:49.659   -A, --pci-allowed <bdf>   pci addr to allow (-B and -A cannot be used at the same time)
00:03:49.659   -B, --pci-blocked <bdf>   pci addr to block (can be used more than once)
00:03:49.659   -u, --no-pci              disable PCI access
00:03:49.659       --vfio-vf-token       VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver
00:03:49.659  
00:03:49.659  Log options:
00:03:49.659   -L, --logflag <flag>      enable log flag (all, app_rpc, json_util, rpc, thread, trace)
00:03:49.659       --silence-noticelog   disable notice level logging to stderr
00:03:49.659  
00:03:49.659  Trace options:
00:03:49.659       --num-trace-entries <num>   number of trace entries for each core, must be power of 2,
00:03:49.659                                   setting 0 to disable trace (default 32768)
00:03:49.659                                   Tracepoints vary in size and can use more than one trace entry.
00:03:49.659   -e, --tpoint-group <group-name>[:<tpoint_mask>]
00:03:49.659                             group_name - tracepoint group name for spdk trace buffers (thread, all).
00:03:49.659                             tpoint_mask - tracepoint mask for enabling individual tpoints inside
00:03:49.659                             a tracepoint group. First tpoint inside a group can be enabled by
00:03:49.659                             setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be
00:03:49.660                             combined (e.g. thread,bdev:0x1). All available tpoints can be found
00:03:49.660                             in /include/spdk_internal/trace_defs.h
00:03:49.660  
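The `-e/--tpoint-group` help text above describes a comma-combined `group[:mask]` syntax (e.g. `thread,bdev:0x1`, where a missing mask enables the whole group). A minimal illustrative parser for that documented format follows; it is a sketch only and is not part of SPDK:

```python
def parse_tpoint_spec(spec: str) -> dict[str, int]:
    """Parse a comma-combined "group[:mask]" spec, e.g. "thread,bdev:0x1".

    Illustrative only; not SPDK code. A missing mask is taken to mean
    "enable every tracepoint in the group" (all bits set here).
    """
    groups: dict[str, int] = {}
    for entry in spec.split(","):
        name, sep, mask = entry.partition(":")
        # int(mask, 16) accepts the "0x1"-style hex masks shown in the help text.
        groups[name] = int(mask, 16) if sep else 0xFFFFFFFFFFFFFFFF
    return groups

print(parse_tpoint_spec("thread,bdev:0x1"))
```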
00:03:49.660  Other options:
00:03:49.660   -h, --help                show this usage
00:03:49.660   -v, --version             print SPDK version
00:03:49.660   -d, --limit-coredump      do not set max coredump size to RLIM_INFINITY
00:03:49.660       --env-context         Opaque context for use of the env implementation
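The `--num-trace-entries` help text above requires the per-core entry count to be a power of 2 (default 32768), with 0 disabling tracing. A minimal sketch of that validity check (hypothetical helper, not SPDK code):

```python
def valid_num_trace_entries(n: int) -> bool:
    """Check the documented --num-trace-entries constraint. Illustrative only."""
    if n == 0:
        return True  # 0 is accepted: it disables tracing entirely
    # A positive power of 2 has exactly one bit set, so n & (n - 1) == 0.
    return n > 0 and (n & (n - 1)) == 0

print(valid_num_trace_entries(32768), valid_num_trace_entries(1000))
```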
00:03:49.660  app_ut: unrecognized option `--test-long-opt'
00:03:49.660  [2024-12-09 10:18:41.781736] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1205:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts.
00:03:49.660  [2024-12-09 10:18:41.781985] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1388:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time
00:03:49.660  [2024-12-09 10:18:41.782093] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1290:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments
00:03:49.660  passed
00:03:49.660  
00:03:49.660  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.660                suites      1      1    n/a      0        0
00:03:49.660                 tests      1      1      1      0        0
00:03:49.660               asserts      8      8      8      0      n/a
00:03:49.660  
00:03:49.660  Elapsed time =    0.000 seconds
00:03:49.660   10:18:41 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut
00:03:49.660  
00:03:49.660  
00:03:49.660       CUnit - A unit testing framework for C - Version 2.1-3
00:03:49.660       http://cunit.sourceforge.net/
00:03:49.660  
00:03:49.660  
00:03:49.660  Suite: app_suite
00:03:49.660    Test: test_create_reactor ...passed
00:03:49.660    Test: test_init_reactors ...passed
00:03:49.660    Test: test_event_call ...passed
00:03:49.660    Test: test_schedule_thread ...passed
00:03:49.660    Test: test_reschedule_thread ...passed
00:03:49.660    Test: test_bind_thread ...passed
00:03:49.660    Test: test_for_each_reactor ...passed
00:03:49.660    Test: test_reactor_stats ...passed
00:03:49.660    Test: test_scheduler ...passed
00:03:49.661    Test: test_scheduler_set_isolated_core_mask ...[2024-12-09 10:18:41.789077] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:03:49.661  passed
00:03:49.661    Test: test_mixed_workload ...[2024-12-09 10:18:41.789234] /home/vagrant/spdk_repo/spdk/lib/event/reactor.c: 187:scheduler_set_isolated_core_mask: *ERROR*: Isolated core mask is not included in app core mask.
00:03:49.661  passed
00:03:49.661  
00:03:49.661  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:49.661                suites      1      1    n/a      0        0
00:03:49.661                 tests     11     11     11      0        0
00:03:49.661               asserts    296    296    296      0      n/a
00:03:49.661  
00:03:49.661  Elapsed time =    0.000 seconds
00:03:49.661  
00:03:49.661  real	0m0.015s
00:03:49.661  user	0m0.014s
00:03:49.661  sys	0m0.004s
00:03:49.661   10:18:41 unittest.unittest_event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:49.661   10:18:41 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x
00:03:49.661  ************************************
00:03:49.661  END TEST unittest_event
00:03:49.661  ************************************
00:03:50.013    10:18:41 unittest -- unit/unittest.sh@217 -- # uname -s
00:03:50.013   10:18:41 unittest -- unit/unittest.sh@217 -- # '[' FreeBSD = Linux ']'
00:03:50.013   10:18:41 unittest -- unit/unittest.sh@221 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:03:50.013   10:18:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.013   10:18:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.013   10:18:41 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.013  ************************************
00:03:50.013  START TEST unittest_accel
00:03:50.013  ************************************
00:03:50.013   10:18:41 unittest.unittest_accel -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut
00:03:50.013  
00:03:50.013  
00:03:50.013       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.013       http://cunit.sourceforge.net/
00:03:50.013  
00:03:50.013  
00:03:50.013  Suite: accel_sequence
00:03:50.013    Test: test_sequence_fill_copy ...passed
00:03:50.013    Test: test_sequence_abort ...passed
00:03:50.013    Test: test_sequence_append_error ...passed
00:03:50.013    Test: test_sequence_completion_error ...passed
00:03:50.013    Test: test_sequence_decompress ...[2024-12-09 10:18:41.863764] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2385:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x3a91098cb9c0
00:03:50.013  [2024-12-09 10:18:41.863908] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2385:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x3a91098cb9c0
00:03:50.013  [2024-12-09 10:18:41.863918] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2298:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x3a91098cb9c0
00:03:50.013  [2024-12-09 10:18:41.863925] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2298:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x3a91098cb9c0
00:03:50.013  passed
00:03:50.013    Test: test_sequence_reverse ...passed
00:03:50.013    Test: test_sequence_copy_elision ...passed
00:03:50.013    Test: test_sequence_accel_buffers ...passed
00:03:50.013    Test: test_sequence_memory_domain ...passed
00:03:50.013    Test: test_sequence_module_memory_domain ...passed
00:03:50.013    Test: test_sequence_crypto ...[2024-12-09 10:18:41.864947] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2190:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7
00:03:50.013  [2024-12-09 10:18:41.864971] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2229:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48
00:03:50.013  passed
00:03:50.013    Test: test_sequence_driver ...passed
00:03:50.013    Test: test_sequence_same_iovs ...passed
00:03:50.013    Test: test_sequence_crc32 ...[2024-12-09 10:18:41.865515] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2337:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x3a91098cbc40 using driver: ut
00:03:50.013  [2024-12-09 10:18:41.865530] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:2399:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x3a91098cbc40 through driver: ut
00:03:50.013  passed
00:03:50.013    Test: test_sequence_dix_generate_verify ...passed
00:03:50.013    Test: test_sequence_dix ...passed
00:03:50.013  Suite: accel
00:03:50.013    Test: test_spdk_accel_task_complete ...passed
00:03:50.013    Test: test_get_task ...passed
00:03:50.013    Test: test_spdk_accel_submit_copy ...passed
00:03:50.013    Test: test_spdk_accel_submit_dualcast ...passed
00:03:50.013    Test: test_spdk_accel_submit_compare ...passed
00:03:50.013    Test: test_spdk_accel_submit_fill ...passed
00:03:50.013    Test: test_spdk_accel_submit_crc32c ...passed
00:03:50.013    Test: test_spdk_accel_submit_crc32cv ...passed
00:03:50.013    Test: test_spdk_accel_submit_copy_crc32c ...passed
00:03:50.013    Test: test_spdk_accel_submit_xor ...passed
00:03:50.013    Test: test_spdk_accel_module_find_by_name ...passed
00:03:50.013    Test: test_spdk_accel_module_register ...[2024-12-09 10:18:41.866332] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:03:50.013  [2024-12-09 10:18:41.866342] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 427:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses
00:03:50.013  passed
00:03:50.013  
00:03:50.013  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.013                suites      2      2    n/a      0        0
00:03:50.013                 tests     28     28     28      0        0
00:03:50.013               asserts    884    884    884      0      n/a
00:03:50.013  
00:03:50.013  Elapsed time =    0.008 seconds
00:03:50.013  
00:03:50.013  real	0m0.015s
00:03:50.013  user	0m0.013s
00:03:50.013  sys	0m0.005s
00:03:50.013   10:18:41 unittest.unittest_accel -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.013  ************************************
00:03:50.013  END TEST unittest_accel
00:03:50.013  ************************************
00:03:50.013   10:18:41 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x
00:03:50.013   10:18:41 unittest -- unit/unittest.sh@222 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:03:50.013   10:18:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.013   10:18:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.013   10:18:41 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.013  ************************************
00:03:50.013  START TEST unittest_ioat
00:03:50.013  ************************************
00:03:50.013   10:18:41 unittest.unittest_ioat -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut
00:03:50.013  
00:03:50.013  
00:03:50.013       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.013       http://cunit.sourceforge.net/
00:03:50.013  
00:03:50.013  
00:03:50.013  Suite: ioat
00:03:50.013    Test: ioat_state_check ...passed
00:03:50.013  
00:03:50.013  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.013                suites      1      1    n/a      0        0
00:03:50.013                 tests      1      1      1      0        0
00:03:50.013               asserts     32     32     32      0      n/a
00:03:50.013  
00:03:50.013  Elapsed time =    0.000 seconds
00:03:50.013  
00:03:50.013  real	0m0.005s
00:03:50.013  user	0m0.004s
00:03:50.014  sys	0m0.004s
00:03:50.014   10:18:41 unittest.unittest_ioat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.014  ************************************
00:03:50.014   10:18:41 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x
00:03:50.014  END TEST unittest_ioat
00:03:50.014  ************************************
00:03:50.014   10:18:41 unittest -- unit/unittest.sh@223 -- # [[ y == y ]]
00:03:50.014   10:18:41 unittest -- unit/unittest.sh@224 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:03:50.014   10:18:41 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.014   10:18:41 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.014   10:18:41 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.014  ************************************
00:03:50.014  START TEST unittest_idxd_user
00:03:50.014  ************************************
00:03:50.014   10:18:41 unittest.unittest_idxd_user -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut
00:03:50.014  
00:03:50.014  
00:03:50.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.014       http://cunit.sourceforge.net/
00:03:50.014  
00:03:50.014  
00:03:50.014  Suite: idxd_user
00:03:50.014    Test: test_idxd_wait_cmd ...[2024-12-09 10:18:41.984551] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:03:50.014  [2024-12-09 10:18:41.984763] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1
00:03:50.014  passed
00:03:50.014    Test: test_idxd_reset_dev ...[2024-12-09 10:18:41.984788] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c:  52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1
00:03:50.014  [2024-12-09 10:18:41.984801] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274
00:03:50.014  passed
00:03:50.014    Test: test_idxd_group_config ...passed
00:03:50.014    Test: test_idxd_wq_config ...passed
00:03:50.014  
00:03:50.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.014                suites      1      1    n/a      0        0
00:03:50.014                 tests      4      4      4      0        0
00:03:50.014               asserts     20     20     20      0      n/a
00:03:50.014  
00:03:50.014  Elapsed time =    0.000 seconds
00:03:50.014  
00:03:50.014  real	0m0.006s
00:03:50.014  user	0m0.005s
00:03:50.014  sys	0m0.004s
00:03:50.014   10:18:41 unittest.unittest_idxd_user -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.014  ************************************
00:03:50.014  END TEST unittest_idxd_user
00:03:50.014  ************************************
00:03:50.014   10:18:41 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x
00:03:50.014   10:18:42 unittest -- unit/unittest.sh@226 -- # run_test unittest_iscsi unittest_iscsi
00:03:50.014   10:18:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.014   10:18:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.014   10:18:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.014  ************************************
00:03:50.014  START TEST unittest_iscsi
00:03:50.014  ************************************
00:03:50.014   10:18:42 unittest.unittest_iscsi -- common/autotest_common.sh@1129 -- # unittest_iscsi
00:03:50.014   10:18:42 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut
00:03:50.014  
00:03:50.014  
00:03:50.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.014       http://cunit.sourceforge.net/
00:03:50.014  
00:03:50.014  
00:03:50.014  Suite: conn_suite
00:03:50.014    Test: read_task_split_in_order_case ...passed
00:03:50.014    Test: read_task_split_reverse_order_case ...passed
00:03:50.014    Test: propagate_scsi_error_status_for_split_read_tasks ...passed
00:03:50.014    Test: process_non_read_task_completion_test ...passed
00:03:50.014    Test: free_tasks_on_connection ...passed
00:03:50.014    Test: free_tasks_with_queued_datain ...passed
00:03:50.014    Test: abort_queued_datain_task_test ...passed
00:03:50.014    Test: abort_queued_datain_tasks_test ...passed
00:03:50.014  
00:03:50.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.014                suites      1      1    n/a      0        0
00:03:50.014                 tests      8      8      8      0        0
00:03:50.014               asserts    230    230    230      0      n/a
00:03:50.014  
00:03:50.014  Elapsed time =    0.000 seconds
00:03:50.014   10:18:42 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut
00:03:50.014  
00:03:50.014  
00:03:50.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.014       http://cunit.sourceforge.net/
00:03:50.014  
00:03:50.014  
00:03:50.014  Suite: iscsi_suite
00:03:50.014    Test: param_negotiation_test ...passed
00:03:50.014    Test: list_negotiation_test ...passed
00:03:50.014    Test: parse_valid_test ...passed
00:03:50.014    Test: parse_invalid_test ...[2024-12-09 10:18:42.047338] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:03:50.014  [2024-12-09 10:18:42.047517] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found
00:03:50.014  [2024-12-09 10:18:42.047533] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key
00:03:50.014  [2024-12-09 10:18:42.048612] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193
00:03:50.014  [2024-12-09 10:18:42.048654] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256
00:03:50.014  [2024-12-09 10:18:42.048668] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63
00:03:50.014  [2024-12-09 10:18:42.048681] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B
00:03:50.014  passed
00:03:50.014  
00:03:50.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.014                suites      1      1    n/a      0        0
00:03:50.014                 tests      4      4      4      0        0
00:03:50.014               asserts    161    161    161      0      n/a
00:03:50.014  
00:03:50.014  Elapsed time =    0.000 seconds
00:03:50.014   10:18:42 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut
00:03:50.014  
00:03:50.014  
00:03:50.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.014       http://cunit.sourceforge.net/
00:03:50.014  
00:03:50.014  
00:03:50.014  Suite: iscsi_target_node_suite
00:03:50.014    Test: add_lun_test_cases ...passed
00:03:50.014    Test: allow_any_allowed ...passed
00:03:50.014    Test: allow_ipv6_allowed ...passed
00:03:50.014    Test: allow_ipv6_denied ...passed
00:03:50.014    Test: allow_ipv6_invalid ...passed
00:03:50.014    Test: allow_ipv4_allowed ...passed
00:03:50.014    Test: allow_ipv4_denied ...passed
00:03:50.014    Test: allow_ipv4_invalid ...passed
00:03:50.014    Test: node_access_allowed ...passed
00:03:50.014    Test: node_access_denied_by_empty_netmask ...passed
00:03:50.014    Test: node_access_multi_initiator_groups_cases ...passed
00:03:50.014    Test: allow_iscsi_name_multi_maps_case ...passed
00:03:50.014    Test: chap_param_test_cases ...[2024-12-09 10:18:42.053223] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1)
00:03:50.014  [2024-12-09 10:18:42.053321] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative
00:03:50.014  [2024-12-09 10:18:42.053328] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:03:50.014  [2024-12-09 10:18:42.053333] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found
00:03:50.014  [2024-12-09 10:18:42.053339] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed
00:03:50.014  [2024-12-09 10:18:42.053388] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0)
00:03:50.014  [2024-12-09 10:18:42.053396] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1)
00:03:50.014  [2024-12-09 10:18:42.053402] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1)
00:03:50.014  [2024-12-09 10:18:42.053407] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1)
00:03:50.014  [2024-12-09 10:18:42.053412] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1)
00:03:50.014  passed
00:03:50.014  
00:03:50.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.014                suites      1      1    n/a      0        0
00:03:50.014                 tests     13     13     13      0        0
00:03:50.014               asserts     50     50     50      0      n/a
00:03:50.014  
00:03:50.014  Elapsed time =    0.000 seconds
00:03:50.014   10:18:42 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut
00:03:50.014  
00:03:50.014  
00:03:50.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.014       http://cunit.sourceforge.net/
00:03:50.014  
00:03:50.014  
00:03:50.014  Suite: iscsi_suite
00:03:50.014    Test: op_login_check_target_test ...[2024-12-09 10:18:42.060487] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied
00:03:50.014  passed
00:03:50.014    Test: op_login_session_normal_test ...[2024-12-09 10:18:42.060695] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:03:50.014  [2024-12-09 10:18:42.060715] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:03:50.014  [2024-12-09 10:18:42.060729] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty
00:03:50.014  [2024-12-09 10:18:42.060757] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed
00:03:50.014  passed
00:03:50.014    Test: maxburstlength_test ...[2024-12-09 10:18:42.060772] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:03:50.014  [2024-12-09 10:18:42.060797] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0
00:03:50.014  [2024-12-09 10:18:42.060809] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed
00:03:50.014  passed
00:03:50.014    Test: underflow_for_read_transfer_test ...passed
00:03:50.014    Test: underflow_for_zero_read_transfer_test ...passed
00:03:50.014    Test: underflow_for_request_sense_test ...passed
00:03:50.014    Test: underflow_for_check_condition_test ...passed
00:03:50.014    Test: add_transfer_task_test ...passed
00:03:50.014    Test: get_transfer_task_test ...passed
00:03:50.014    Test: del_transfer_task_test ...passed
00:03:50.014    Test: clear_all_transfer_tasks_test ...passed
00:03:50.014    Test: build_iovs_test ...passed
00:03:50.014    Test: build_iovs_with_md_test ...passed
00:03:50.015    Test: pdu_hdr_op_login_test ...passed
00:03:50.015    Test: pdu_hdr_op_text_test ...passed
00:03:50.015    Test: pdu_hdr_op_logout_test ...passed
00:03:50.015    Test: pdu_hdr_op_scsi_test ...passed
00:03:50.015    Test: pdu_hdr_op_task_mgmt_test ...passed
00:03:50.015    Test: pdu_hdr_op_nopout_test ...passed
00:03:50.015    Test: pdu_hdr_op_data_test ...passed
00:03:50.015    Test: empty_text_with_cbit_test ...passed
00:03:50.015    Test: pdu_payload_read_test ...[2024-12-09 10:18:42.060856] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:03:50.015  [2024-12-09 10:18:42.060870] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL)
00:03:50.015  [2024-12-09 10:18:42.061029] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error
00:03:50.015  [2024-12-09 10:18:42.061045] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0
00:03:50.015  [2024-12-09 10:18:42.061058] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2
00:03:50.015  [2024-12-09 10:18:42.061074] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68)
00:03:50.015  [2024-12-09 10:18:42.061087] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue
00:03:50.015  [2024-12-09 10:18:42.061100] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678...
00:03:50.015  [2024-12-09 10:18:42.061115] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason.
00:03:50.015  [2024-12-09 10:18:42.061134] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:03:50.015  [2024-12-09 10:18:42.061145] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session
00:03:50.015  [2024-12-09 10:18:42.061157] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported
00:03:50.015  [2024-12-09 10:18:42.061172] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68)
00:03:50.015  [2024-12-09 10:18:42.061185] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67)
00:03:50.015  [2024-12-09 10:18:42.061200] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0
00:03:50.015  [2024-12-09 10:18:42.061215] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session
00:03:50.015  [2024-12-09 10:18:42.061229] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0
00:03:50.015  [2024-12-09 10:18:42.061247] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session
00:03:50.015  [2024-12-09 10:18:42.061260] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:03:50.015  [2024-12-09 10:18:42.061271] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3
00:03:50.015  [2024-12-09 10:18:42.061283] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0
00:03:50.015  [2024-12-09 10:18:42.061297] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session
00:03:50.015  [2024-12-09 10:18:42.061309] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0
00:03:50.015  [2024-12-09 10:18:42.061322] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU
00:03:50.015  [2024-12-09 10:18:42.061334] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1
00:03:50.015  [2024-12-09 10:18:42.061347] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error
00:03:50.015  [2024-12-09 10:18:42.061359] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error
00:03:50.015  [2024-12-09 10:18:42.061371] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535)
00:03:50.015  passed
00:03:50.015    Test: data_out_pdu_sequence_test ...[2024-12-09 10:18:42.061762] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536)
00:03:50.015  passed
00:03:50.015    Test: immediate_data_and_data_out_pdu_sequence_test ...passed
00:03:50.015  
00:03:50.015  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.015                suites      1      1    n/a      0        0
00:03:50.015                 tests     24     24     24      0        0
00:03:50.015               asserts 150253 150253 150253      0      n/a
00:03:50.015  
00:03:50.015  Elapsed time =    0.000 seconds
00:03:50.015   10:18:42 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut
00:03:50.015  
00:03:50.015  
00:03:50.015       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.015       http://cunit.sourceforge.net/
00:03:50.015  
00:03:50.015  
00:03:50.015  Suite: init_grp_suite
00:03:50.015    Test: create_initiator_group_success_case ...passed
00:03:50.015    Test: find_initiator_group_success_case ...passed
00:03:50.015    Test: register_initiator_group_twice_case ...passed
00:03:50.015    Test: add_initiator_name_success_case ...passed
00:03:50.015    Test: add_initiator_name_fail_case ...[2024-12-09 10:18:42.069107] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c:  54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed
00:03:50.015  passed
00:03:50.015    Test: delete_all_initiator_names_success_case ...passed
00:03:50.015    Test: add_netmask_success_case ...passed
00:03:50.015    Test: add_netmask_fail_case ...passed
00:03:50.015    Test: delete_all_netmasks_success_case ...passed
00:03:50.015    Test: initiator_name_overwrite_all_to_any_case ...passed
00:03:50.015    Test: netmask_overwrite_all_to_any_case ...passed
00:03:50.015    Test: add_delete_initiator_names_case ...passed
00:03:50.015    Test: add_duplicated_initiator_names_case ...passed
00:03:50.015    Test: delete_nonexisting_initiator_names_case ...passed
00:03:50.015    Test: add_delete_netmasks_case ...passed
00:03:50.015    Test: add_duplicated_netmasks_case ...passed
00:03:50.015    Test: delete_nonexisting_netmasks_case ...passed
00:03:50.015  
00:03:50.015  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.015                suites      1      1    n/a      0        0
00:03:50.015                 tests     17     17     17      0        0
00:03:50.015               asserts    108    108    108      0      n/a
00:03:50.015  
00:03:50.015  Elapsed time =    0.000 seconds
00:03:50.015  [2024-12-09 10:18:42.069302] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed
00:03:50.015   10:18:42 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut
00:03:50.015  
00:03:50.015  
00:03:50.015       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.015       http://cunit.sourceforge.net/
00:03:50.015  
00:03:50.015  
00:03:50.015  Suite: portal_grp_suite
00:03:50.015    Test: portal_create_ipv4_normal_case ...passed
00:03:50.015    Test: portal_create_ipv6_normal_case ...passed
00:03:50.015    Test: portal_create_ipv4_wildcard_case ...passed
00:03:50.015    Test: portal_create_ipv6_wildcard_case ...passed
00:03:50.015    Test: portal_create_twice_case ...passed
00:03:50.015    Test: portal_grp_register_unregister_case ...passed
00:03:50.015    Test: portal_grp_register_twice_case ...passed
00:03:50.015    Test: portal_grp_add_delete_case ...[2024-12-09 10:18:42.073387] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists
00:03:50.015  passed
00:03:50.015    Test: portal_grp_add_delete_twice_case ...passed
00:03:50.015  
00:03:50.015  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.015                suites      1      1    n/a      0        0
00:03:50.015                 tests      9      9      9      0        0
00:03:50.015               asserts     44     44     44      0      n/a
00:03:50.015  
00:03:50.015  Elapsed time =    0.000 seconds
00:03:50.015  
00:03:50.015  real	0m0.038s
00:03:50.015  user	0m0.011s
00:03:50.015  sys	0m0.026s
00:03:50.015   10:18:42 unittest.unittest_iscsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.015   10:18:42 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x
00:03:50.015  ************************************
00:03:50.015  END TEST unittest_iscsi
00:03:50.015  ************************************
00:03:50.015   10:18:42 unittest -- unit/unittest.sh@227 -- # run_test unittest_json unittest_json
00:03:50.015   10:18:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.015   10:18:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.015   10:18:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.015  ************************************
00:03:50.015  START TEST unittest_json
00:03:50.015  ************************************
00:03:50.015   10:18:42 unittest.unittest_json -- common/autotest_common.sh@1129 -- # unittest_json
00:03:50.015   10:18:42 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut
00:03:50.015  
00:03:50.015  
00:03:50.015       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.015       http://cunit.sourceforge.net/
00:03:50.015  
00:03:50.015  
00:03:50.015  Suite: json
00:03:50.015    Test: test_parse_literal ...passed
00:03:50.015    Test: test_parse_string_simple ...passed
00:03:50.015    Test: test_parse_string_control_chars ...passed
00:03:50.015    Test: test_parse_string_utf8 ...passed
00:03:50.015    Test: test_parse_string_escapes_twochar ...passed
00:03:50.015    Test: test_parse_string_escapes_unicode ...passed
00:03:50.015    Test: test_parse_number ...passed
00:03:50.015    Test: test_parse_array ...passed
00:03:50.015    Test: test_parse_object ...passed
00:03:50.015    Test: test_parse_nesting ...passed
00:03:50.015    Test: test_parse_comment ...passed
00:03:50.015  
00:03:50.015  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.015                suites      1      1    n/a      0        0
00:03:50.015                 tests     11     11     11      0        0
00:03:50.015               asserts   1516   1516   1516      0      n/a
00:03:50.015  
00:03:50.015  Elapsed time =    0.000 seconds
00:03:50.015   10:18:42 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut
00:03:50.015  
00:03:50.015  
00:03:50.015       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.015       http://cunit.sourceforge.net/
00:03:50.015  
00:03:50.015  
00:03:50.016  Suite: json
00:03:50.016    Test: test_strequal ...passed
00:03:50.016    Test: test_num_to_uint16 ...passed
00:03:50.016    Test: test_num_to_int32 ...passed
00:03:50.016    Test: test_num_to_uint64 ...passed
00:03:50.016    Test: test_decode_object ...passed
00:03:50.016    Test: test_decode_array ...passed
00:03:50.016    Test: test_decode_bool ...passed
00:03:50.016    Test: test_decode_uint16 ...passed
00:03:50.016    Test: test_decode_int32 ...passed
00:03:50.016    Test: test_decode_uint32 ...passed
00:03:50.016    Test: test_decode_uint64 ...passed
00:03:50.016    Test: test_decode_string ...passed
00:03:50.016    Test: test_decode_uuid ...passed
00:03:50.016    Test: test_find ...passed
00:03:50.016    Test: test_find_array ...passed
00:03:50.016    Test: test_iterating ...passed
00:03:50.016    Test: test_free_object ...passed
00:03:50.016  
00:03:50.016  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.016                suites      1      1    n/a      0        0
00:03:50.016                 tests     17     17     17      0        0
00:03:50.016               asserts    236    236    236      0      n/a
00:03:50.016  
00:03:50.016  Elapsed time =    0.000 seconds
00:03:50.016   10:18:42 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut
00:03:50.016  
00:03:50.016  
00:03:50.016       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.016       http://cunit.sourceforge.net/
00:03:50.016  
00:03:50.016  
00:03:50.016  Suite: json
00:03:50.016    Test: test_write_literal ...passed
00:03:50.016    Test: test_write_string_simple ...passed
00:03:50.016    Test: test_write_string_escapes ...passed
00:03:50.016    Test: test_write_string_utf16le ...passed
00:03:50.016    Test: test_write_number_int32 ...passed
00:03:50.016    Test: test_write_number_uint32 ...passed
00:03:50.016    Test: test_write_number_uint128 ...passed
00:03:50.016    Test: test_write_string_number_uint128 ...passed
00:03:50.016    Test: test_write_number_int64 ...passed
00:03:50.016    Test: test_write_number_uint64 ...passed
00:03:50.016    Test: test_write_number_double ...passed
00:03:50.016    Test: test_write_uuid ...passed
00:03:50.016    Test: test_write_array ...passed
00:03:50.016    Test: test_write_object ...passed
00:03:50.016    Test: test_write_nesting ...passed
00:03:50.016    Test: test_write_val ...passed
00:03:50.016  
00:03:50.016  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.016                suites      1      1    n/a      0        0
00:03:50.016                 tests     16     16     16      0        0
00:03:50.016               asserts    918    918    918      0      n/a
00:03:50.016  
00:03:50.016  Elapsed time =    0.000 seconds
00:03:50.016   10:18:42 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut
00:03:50.016  
00:03:50.016  
00:03:50.016       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.016       http://cunit.sourceforge.net/
00:03:50.016  
00:03:50.016  
00:03:50.016  Suite: jsonrpc
00:03:50.016    Test: test_parse_request ...passed
00:03:50.016    Test: test_parse_request_streaming ...passed
00:03:50.016  
00:03:50.016  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.016                suites      1      1    n/a      0        0
00:03:50.016                 tests      2      2      2      0        0
00:03:50.016               asserts    289    289    289      0      n/a
00:03:50.016  
00:03:50.016  Elapsed time =    0.000 seconds
00:03:50.016  
00:03:50.016  real	0m0.024s
00:03:50.016  user	0m0.015s
00:03:50.016  sys	0m0.009s
00:03:50.016   10:18:42 unittest.unittest_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.016   10:18:42 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x
00:03:50.016  ************************************
00:03:50.016  END TEST unittest_json
00:03:50.016  ************************************
00:03:50.278   10:18:42 unittest -- unit/unittest.sh@228 -- # run_test unittest_rpc unittest_rpc
00:03:50.278   10:18:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.278   10:18:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.278   10:18:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.278  ************************************
00:03:50.278  START TEST unittest_rpc
00:03:50.278  ************************************
00:03:50.278   10:18:42 unittest.unittest_rpc -- common/autotest_common.sh@1129 -- # unittest_rpc
00:03:50.278   10:18:42 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut
00:03:50.278  
00:03:50.278  
00:03:50.278       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.278       http://cunit.sourceforge.net/
00:03:50.278  
00:03:50.278  
00:03:50.278  Suite: rpc
00:03:50.278    Test: test_jsonrpc_handler ...passed
00:03:50.278    Test: test_spdk_rpc_is_method_allowed ...passed
00:03:50.279    Test: test_rpc_get_methods ...[2024-12-09 10:18:42.191979] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed
00:03:50.279  passed
00:03:50.279    Test: test_rpc_spdk_get_version ...passed
00:03:50.279    Test: test_spdk_rpc_listen_close ...passed
00:03:50.279    Test: test_rpc_run_multiple_servers ...passed
00:03:50.279  
00:03:50.279  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.279                suites      1      1    n/a      0        0
00:03:50.279                 tests      6      6      6      0        0
00:03:50.279               asserts     23     23     23      0      n/a
00:03:50.279  
00:03:50.279  Elapsed time =    0.000 seconds
00:03:50.279  
00:03:50.279  real	0m0.006s
00:03:50.279  user	0m0.006s
00:03:50.279  sys	0m0.000s
00:03:50.279   10:18:42 unittest.unittest_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.279   10:18:42 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:50.279  ************************************
00:03:50.279  END TEST unittest_rpc
00:03:50.279  ************************************
00:03:50.279   10:18:42 unittest -- unit/unittest.sh@229 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:03:50.279   10:18:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.279   10:18:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.279   10:18:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.279  ************************************
00:03:50.279  START TEST unittest_notify
00:03:50.279  ************************************
00:03:50.279   10:18:42 unittest.unittest_notify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut
00:03:50.279  
00:03:50.279  
00:03:50.279       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.279       http://cunit.sourceforge.net/
00:03:50.279  
00:03:50.279  
00:03:50.279  Suite: app_suite
00:03:50.279    Test: notify ...passed
00:03:50.279  
00:03:50.279  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.279                suites      1      1    n/a      0        0
00:03:50.279                 tests      1      1      1      0        0
00:03:50.279               asserts     13     13     13      0      n/a
00:03:50.279  
00:03:50.279  Elapsed time =    0.000 seconds
00:03:50.279  
00:03:50.279  real	0m0.005s
00:03:50.279  user	0m0.004s
00:03:50.279  sys	0m0.005s
00:03:50.279   10:18:42 unittest.unittest_notify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.279  ************************************
00:03:50.279  END TEST unittest_notify
00:03:50.279  ************************************
00:03:50.279   10:18:42 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x
00:03:50.279   10:18:42 unittest -- unit/unittest.sh@230 -- # run_test unittest_nvme unittest_nvme
00:03:50.279   10:18:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.279   10:18:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.279   10:18:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.279  ************************************
00:03:50.279  START TEST unittest_nvme
00:03:50.279  ************************************
00:03:50.279   10:18:42 unittest.unittest_nvme -- common/autotest_common.sh@1129 -- # unittest_nvme
00:03:50.279   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut
00:03:50.279  
00:03:50.279  
00:03:50.279       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.279       http://cunit.sourceforge.net/
00:03:50.279  
00:03:50.279  
00:03:50.279  Suite: nvme
00:03:50.279    Test: test_opc_data_transfer ...passed
00:03:50.279    Test: test_spdk_nvme_transport_id_parse_trtype ...passed
00:03:50.279    Test: test_spdk_nvme_transport_id_parse_adrfam ...passed
00:03:50.279    Test: test_trid_parse_and_compare ...[2024-12-09 10:18:42.302050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1237:parse_next_key: *ERROR*: Key without ':' or '=' separator
00:03:50.279  [2024-12-09 10:18:42.302265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1294:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:03:50.279  [2024-12-09 10:18:42.302285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1250:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31
00:03:50.279  [2024-12-09 10:18:42.302297] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1294:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:03:50.279  [2024-12-09 10:18:42.302311] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1260:parse_next_key: *ERROR*: Key without value
00:03:50.279  [2024-12-09 10:18:42.302322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1294:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID
00:03:50.279  passed
00:03:50.279    Test: test_trid_trtype_str ...passed
00:03:50.279    Test: test_trid_adrfam_str ...passed
00:03:50.279    Test: test_nvme_ctrlr_probe ...passed
00:03:50.279    Test: test_spdk_nvme_probe_ext ...passed
00:03:50.279    Test: test_spdk_nvme_connect ...passed
00:03:50.279    Test: test_nvme_ctrlr_probe_internal ...passed
00:03:50.279    Test: test_nvme_init_controllers ...passed
00:03:50.279    Test: test_nvme_driver_init ...[2024-12-09 10:18:42.302444] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 663:nvme_ctrlr_probe: *ERROR*: NVMe controller for SSD:  is being destructed
00:03:50.279  [2024-12-09 10:18:42.302465] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:03:50.279  [2024-12-09 10:18:42.302484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:03:50.279  [2024-12-09 10:18:42.302499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:03:50.279  [2024-12-09 10:18:42.302515] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 834:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available
00:03:50.279  [2024-12-09 10:18:42.302527] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed
00:03:50.279  [2024-12-09 10:18:42.302551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1048:spdk_nvme_connect: *ERROR*: No transport ID specified
00:03:50.279  [2024-12-09 10:18:42.302643] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:03:50.279  [2024-12-09 10:18:42.302670] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 
00:03:50.279  [2024-12-09 10:18:42.302682] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:03:50.279  [2024-12-09 10:18:42.302719] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 
00:03:50.279  [2024-12-09 10:18:42.302753] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 576:nvme_driver_init: *ERROR*: primary process failed to reserve memory
00:03:50.279  [2024-12-09 10:18:42.302766] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 599:nvme_driver_init: *ERROR*: primary process is not started yet
00:03:50.279  passed
00:03:50.279    Test: test_spdk_nvme_detach ...passed
00:03:50.279    Test: test_nvme_completion_poll_cb ...passed
00:03:50.279    Test: test_nvme_user_copy_cmd_complete ...passed
00:03:50.279    Test: test_nvme_allocate_request_null ...passed
00:03:50.279    Test: test_nvme_allocate_request ...passed
00:03:50.279    Test: test_nvme_free_request ...passed
00:03:50.279    Test: test_nvme_allocate_request_user_copy ...passed
00:03:50.279    Test: test_nvme_robust_mutex_init_shared ...passed
00:03:50.279    Test: test_nvme_request_check_timeout ...passed
00:03:50.279    Test: test_nvme_wait_for_completion ...passed
00:03:50.279    Test: test_spdk_nvme_parse_func ...[2024-12-09 10:18:42.416646] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 594:nvme_driver_init: *ERROR*: timeout waiting for primary process to init
00:03:50.279  passed
00:03:50.279    Test: test_spdk_nvme_detach_async ...passed
00:03:50.279    Test: test_nvme_parse_addr ...passed
00:03:50.279  
00:03:50.279  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.279                suites      1      1    n/a      0        0
00:03:50.279                 tests     25     25     25      0        0
00:03:50.279               asserts    332    332    332      0      n/a
00:03:50.279  
00:03:50.279  Elapsed time =    0.000 seconds
00:03:50.279  [2024-12-09 10:18:42.416949] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1711:nvme_parse_addr: *ERROR*: getaddrinfo failed: Name does not resolve (8)
00:03:50.279   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut
00:03:50.279  
00:03:50.279  
00:03:50.279       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.279       http://cunit.sourceforge.net/
00:03:50.279  
00:03:50.279  
00:03:50.279  Suite: nvme_ctrlr
00:03:50.279    Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-12-09 10:18:42.425286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.279  passed
00:03:50.279    Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-12-09 10:18:42.426854] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.279  passed
00:03:50.279    Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-12-09 10:18:42.428151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.279  passed
00:03:50.279    Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-12-09 10:18:42.429501] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.279  passed
00:03:50.279    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-12-09 10:18:42.430836] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  [2024-12-09 10:18:42.432010] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:03:50.542  [2024-12-09 10:18:42.433268] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:03:50.542  [2024-12-09 10:18:42.434484] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-12-09 10:18:42.437038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  [2024-12-09 10:18:42.439383] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:03:50.542  [2024-12-09 10:18:42.440539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-12-09 10:18:42.443075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  [2024-12-09 10:18:42.444322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:03:50.542  [2024-12-09 10:18:42.446710] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4108:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr enable failed with error: -22
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_init_delay ...[2024-12-09 10:18:42.449160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_alloc_io_qpair_rr_1 ...passed
00:03:50.542    Test: test_ctrlr_get_default_ctrlr_opts ...passed
00:03:50.542    Test: test_ctrlr_get_default_io_qpair_opts ...passed
00:03:50.542    Test: test_alloc_io_qpair_wrr_1 ...passed
00:03:50.542    Test: test_alloc_io_qpair_wrr_2 ...[2024-12-09 10:18:42.450370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  [2024-12-09 10:18:42.450419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:03:50.542  [2024-12-09 10:18:42.450432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:03:50.542  [2024-12-09 10:18:42.450441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:03:50.542  [2024-12-09 10:18:42.450449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 381:nvme_ctrlr_create_io_qpair: *ERROR*: [, 0] invalid queue priority for default round robin arbitration method
00:03:50.542  [2024-12-09 10:18:42.450485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  [2024-12-09 10:18:42.450503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_spdk_nvme_ctrlr_update_firmware ...passed
00:03:50.542    Test: test_nvme_ctrlr_fail ...passed
00:03:50.542    Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed
00:03:50.542    Test: test_nvme_ctrlr_set_supported_features ...passed
00:03:50.542    Test: test_nvme_ctrlr_set_host_feature ...[2024-12-09 10:18:42.450514] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [, 0] No free I/O queue IDs
00:03:50.542  [2024-12-09 10:18:42.450533] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5051:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_update_firmware invalid size!
00:03:50.542  [2024-12-09 10:18:42.450543] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:03:50.542  [2024-12-09 10:18:42.450552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5128:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] nvme_ctrlr_cmd_fw_commit failed!
00:03:50.542  [2024-12-09 10:18:42.450560] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5088:spdk_nvme_ctrlr_update_firmware: *ERROR*: [, 0] spdk_nvme_ctrlr_fw_image_download failed!
00:03:50.542  [2024-12-09 10:18:42.450571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [, 0] in failed state.
00:03:50.542  [2024-12-09 10:18:42.450589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed
00:03:50.542    Test: test_nvme_ctrlr_test_active_ns ...[2024-12-09 10:18:42.451872] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_test_active_ns_error_case ...passed
00:03:50.542    Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed
00:03:50.542    Test: test_spdk_nvme_ctrlr_set_trid ...passed
00:03:50.542    Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-12-09 10:18:42.484411] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_init_set_num_queues ...[2024-12-09 10:18:42.491250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-12-09 10:18:42.492434] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  [2024-12-09 10:18:42.492470] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3040:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [, 0] Keep alive timeout Get Feature failed: SC 6 SCT 0
00:03:50.542  passed
00:03:50.542    Test: test_alloc_io_qpair_fail ...passed
00:03:50.542    Test: test_nvme_ctrlr_add_remove_process ...passed
00:03:50.542    Test: test_nvme_ctrlr_set_arbitration_feature ...passed
00:03:50.542    Test: test_nvme_ctrlr_set_state ...passed
00:03:50.542    Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-12-09 10:18:42.493623] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  [2024-12-09 10:18:42.493660] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 505:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [, 0] nvme_transport_ctrlr_connect_io_qpair() failed
00:03:50.542  [2024-12-09 10:18:42.493689] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1555:_nvme_ctrlr_set_state: *ERROR*: [, 0] Specified timeout would cause integer overflow. Defaulting to no timeout.
00:03:50.542  [2024-12-09 10:18:42.493698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-12-09 10:18:42.496381] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_ns_mgmt ...[2024-12-09 10:18:42.502900] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.542  passed
00:03:50.542    Test: test_nvme_ctrlr_reset ...passed
00:03:50.542    Test: test_nvme_ctrlr_aer_callback ...[2024-12-09 10:18:42.504151] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.543  [2024-12-09 10:18:42.504241] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.543  passed
00:03:50.543    Test: test_nvme_ctrlr_ns_attr_changed ...[2024-12-09 10:18:42.505424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.543  passed
00:03:50.543    Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed
00:03:50.543    Test: test_nvme_ctrlr_set_supported_log_pages ...passed
00:03:50.543    Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-12-09 10:18:42.506747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.543  passed
00:03:50.543    Test: test_nvme_ctrlr_parse_ana_log_page ...passed
00:03:50.543    Test: test_nvme_ctrlr_ana_resize ...[2024-12-09 10:18:42.507974] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.543  passed
00:03:50.543    Test: test_nvme_ctrlr_get_memory_domains ...passed
00:03:50.543    Test: test_nvme_transport_ctrlr_ready ...passed
00:03:50.543    Test: test_nvme_ctrlr_disable ...[2024-12-09 10:18:42.509219] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4194:nvme_ctrlr_process_init: *ERROR*: [, 0] Transport controller ready step failed: rc -1
00:03:50.543  [2024-12-09 10:18:42.509258] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4247:nvme_ctrlr_process_init: *ERROR*: [, 0] Ctrlr operation failed with error: -1, ctrlr state: 53 (error)
00:03:50.543  [2024-12-09 10:18:42.509271] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4316:nvme_ctrlr_construct: *ERROR*: [, 0] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value
00:03:50.543  passed
00:03:50.543    Test: test_nvme_numa_id ...passed
00:03:50.543  
00:03:50.543  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.543                suites      1      1    n/a      0        0
00:03:50.543                 tests     45     45     45      0        0
00:03:50.543               asserts  10448  10448  10448      0      n/a
00:03:50.543  
00:03:50.543  Elapsed time =    0.039 seconds
00:03:50.543   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut
00:03:50.543  
00:03:50.543  
00:03:50.543       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.543       http://cunit.sourceforge.net/
00:03:50.543  
00:03:50.543  
00:03:50.543  Suite: nvme_ctrlr_cmd
00:03:50.543    Test: test_get_log_pages ...passed
00:03:50.543    Test: test_set_feature_cmd ...passed
00:03:50.543    Test: test_set_feature_ns_cmd ...passed
00:03:50.543    Test: test_get_feature_cmd ...passed
00:03:50.543    Test: test_get_feature_ns_cmd ...passed
00:03:50.543    Test: test_abort_cmd ...passed
00:03:50.543    Test: test_set_host_id_cmds ...passed
00:03:50.543    Test: test_io_cmd_raw_no_payload_build ...passed
00:03:50.543    Test: test_io_raw_cmd ...passed
00:03:50.543    Test: test_io_raw_cmd_with_md ...passed
00:03:50.543    Test: test_namespace_attach ...passed
00:03:50.543    Test: test_namespace_detach ...passed
00:03:50.543    Test: test_namespace_create ...passed
00:03:50.543    Test: test_namespace_delete ...passed
00:03:50.543    Test: test_doorbell_buffer_config ...passed
00:03:50.543    Test: test_format_nvme ...passed
00:03:50.543    Test: test_fw_commit ...passed
00:03:50.543    Test: test_fw_image_download ...passed
00:03:50.543    Test: test_sanitize ...passed
00:03:50.543    Test: test_directive ...passed
00:03:50.543    Test: test_nvme_request_add_abort ...passed
00:03:50.543    Test: test_spdk_nvme_ctrlr_cmd_abort ...passed
00:03:50.543    Test: test_nvme_ctrlr_cmd_identify ...passed
00:03:50.543    Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed
00:03:50.543  
00:03:50.543  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.543                suites      1      1    n/a      0        0
00:03:50.543                 tests     24     24     24      0        0
00:03:50.543               asserts    198    198    198      0      n/a
00:03:50.543  
00:03:50.543  Elapsed time =    0.000 seconds
00:03:50.543  [2024-12-09 10:18:42.517421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024
00:03:50.543   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut
00:03:50.543  
00:03:50.543  
00:03:50.543       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.543       http://cunit.sourceforge.net/
00:03:50.543  
00:03:50.543  
00:03:50.543  Suite: nvme_ctrlr_cmd
00:03:50.543    Test: test_geometry_cmd ...passed
00:03:50.543    Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed
00:03:50.543  
00:03:50.543  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.543                suites      1      1    n/a      0        0
00:03:50.543                 tests      2      2      2      0        0
00:03:50.543               asserts      7      7      7      0      n/a
00:03:50.543  
00:03:50.543  Elapsed time =    0.000 seconds
00:03:50.543   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut
00:03:50.543  
00:03:50.543  
00:03:50.543       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.543       http://cunit.sourceforge.net/
00:03:50.543  
00:03:50.543  
00:03:50.543  Suite: nvme
00:03:50.543    Test: test_nvme_ns_construct ...passed
00:03:50.543    Test: test_nvme_ns_uuid ...passed
00:03:50.543    Test: test_nvme_ns_csi ...passed
00:03:50.543    Test: test_nvme_ns_data ...passed
00:03:50.543    Test: test_nvme_ns_set_identify_data ...passed
00:03:50.543    Test: test_spdk_nvme_ns_get_values ...passed
00:03:50.543    Test: test_spdk_nvme_ns_is_active ...passed
00:03:50.543    Test: spdk_nvme_ns_supports ...passed
00:03:50.543    Test: test_nvme_ns_has_supported_iocs_specific_data ...passed
00:03:50.543    Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed
00:03:50.543    Test: test_nvme_ctrlr_identify_id_desc ...passed
00:03:50.543    Test: test_nvme_ns_find_id_desc ...passed
00:03:50.543  
00:03:50.543  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.543                suites      1      1    n/a      0        0
00:03:50.543                 tests     12     12     12      0        0
00:03:50.543               asserts     95     95     95      0      n/a
00:03:50.543  
00:03:50.543  Elapsed time =    0.000 seconds
00:03:50.543   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut
00:03:50.543  
00:03:50.543  
00:03:50.543       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.543       http://cunit.sourceforge.net/
00:03:50.543  
00:03:50.543  
00:03:50.543  Suite: nvme_ns_cmd
00:03:50.543    Test: split_test ...passed
00:03:50.543    Test: split_test2 ...passed
00:03:50.543    Test: split_test3 ...passed
00:03:50.543    Test: split_test4 ...passed
00:03:50.543    Test: test_nvme_ns_cmd_flush ...passed
00:03:50.543    Test: test_nvme_ns_cmd_dataset_management ...passed
00:03:50.543    Test: test_nvme_ns_cmd_copy ...passed
00:03:50.543    Test: test_io_flags ...passed
00:03:50.543    Test: test_nvme_ns_cmd_write_zeroes ...passed
00:03:50.543    Test: test_nvme_ns_cmd_write_uncorrectable ...passed
00:03:50.543    Test: test_nvme_ns_cmd_reservation_register ...passed
00:03:50.543    Test: test_nvme_ns_cmd_reservation_release ...passed
00:03:50.543    Test: test_nvme_ns_cmd_reservation_acquire ...passed
00:03:50.543    Test: test_nvme_ns_cmd_reservation_report ...passed
00:03:50.543  [2024-12-09 10:18:42.532190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc
00:03:50.543  
00:03:50.543    Test: test_cmd_child_request ...passed
00:03:50.543    Test: test_nvme_ns_cmd_readv ...passed
00:03:50.543    Test: test_nvme_ns_cmd_readv_sgl ...passed
00:03:50.543    Test: test_nvme_ns_cmd_read_with_md ...passed
00:03:50.543    Test: test_nvme_ns_cmd_writev ...[2024-12-09 10:18:42.533086] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 391:_nvme_ns_cmd_split_request_sgl: *ERROR*: Unable to send I/O. Would require more than the supported number of SGL Elements.
00:03:50.543  [2024-12-09 10:18:42.533167] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512
00:03:50.543  passed
00:03:50.543    Test: test_nvme_ns_cmd_write_with_md ...passed
00:03:50.543    Test: test_nvme_ns_cmd_zone_append_with_md ...passed
00:03:50.543    Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed
00:03:50.543    Test: test_nvme_ns_cmd_comparev ...passed
00:03:50.543    Test: test_nvme_ns_cmd_compare_and_write ...passed
00:03:50.543    Test: test_nvme_ns_cmd_compare_with_md ...passed
00:03:50.543    Test: test_nvme_ns_cmd_comparev_with_md ...passed
00:03:50.543    Test: test_nvme_ns_cmd_setup_request ...passed
00:03:50.543    Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed
00:03:50.543    Test: test_spdk_nvme_ns_cmd_writev_ext ...passed
00:03:50.543    Test: test_spdk_nvme_ns_cmd_readv_ext ...passed
00:03:50.543    Test: test_nvme_ns_cmd_verify ...passed
00:03:50.543    Test: test_nvme_ns_cmd_io_mgmt_send ...passed
00:03:50.543    Test: test_nvme_ns_cmd_io_mgmt_recv ...passed
00:03:50.543  
00:03:50.543  [2024-12-09 10:18:42.533359] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:03:50.543  [2024-12-09 10:18:42.533386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f
00:03:50.543  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.543                suites      1      1    n/a      0        0
00:03:50.543                 tests     33     33     33      0        0
00:03:50.543               asserts    569    569    569      0      n/a
00:03:50.543  
00:03:50.543  Elapsed time =    0.000 seconds
00:03:50.543   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut
00:03:50.543  
00:03:50.543  
00:03:50.543       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.543       http://cunit.sourceforge.net/
00:03:50.543  
00:03:50.543  
00:03:50.543  Suite: nvme_ns_cmd
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_read ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_write ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed
00:03:50.543    Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed
00:03:50.544    Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed
00:03:50.544  
00:03:50.544  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.544                suites      1      1    n/a      0        0
00:03:50.544                 tests     12     12     12      0        0
00:03:50.544               asserts    123    123    123      0      n/a
00:03:50.544  
00:03:50.544  Elapsed time =    0.000 seconds
00:03:50.544   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut
00:03:50.544  
00:03:50.544  
00:03:50.544       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.544       http://cunit.sourceforge.net/
00:03:50.544  
00:03:50.544  
00:03:50.544  Suite: nvme_qpair
00:03:50.544    Test: test3 ...passed
00:03:50.544    Test: test_ctrlr_failed ...passed
00:03:50.544    Test: struct_packing ...passed
00:03:50.544    Test: test_nvme_qpair_process_completions ...passed
00:03:50.544    Test: test_nvme_completion_is_retry ...passed
00:03:50.544    Test: test_get_status_string ...passed
00:03:50.544    Test: test_nvme_qpair_add_cmd_error_injection ...passed
00:03:50.544    Test: test_nvme_qpair_submit_request ...passed
00:03:50.544    Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed
00:03:50.544    Test: test_nvme_qpair_manual_complete_request ...passed
00:03:50.544    Test: test_nvme_qpair_init_deinit ...passed
00:03:50.544    Test: test_nvme_get_sgl_print_info ...passed
00:03:50.544  
00:03:50.544  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.544                suites      1      1    n/a      0        0
00:03:50.544                 tests     12     12     12      0        0
00:03:50.544               asserts    154    154    154      0      n/a
00:03:50.544  
00:03:50.544  Elapsed time =    0.000 seconds
00:03:50.544  [2024-12-09 10:18:42.546479] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:03:50.544  [2024-12-09 10:18:42.546619] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:03:50.544  [2024-12-09 10:18:42.546661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 813:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (Device not configured) on qpair id 0
00:03:50.544  [2024-12-09 10:18:42.546805] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 813:spdk_nvme_qpair_process_completions: *ERROR*: [, 0] CQ transport error -6 (Device not configured) on qpair id 1
00:03:50.544  [2024-12-09 10:18:42.546843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:03:50.544   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut
00:03:50.544  
00:03:50.544  
00:03:50.544       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.544       http://cunit.sourceforge.net/
00:03:50.544  
00:03:50.544  
00:03:50.544  Suite: nvme_pcie
00:03:50.544    Test: test_prp_list_append ...[2024-12-09 10:18:42.552749] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:03:50.544  [2024-12-09 10:18:42.552977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1271:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800)
00:03:50.544  [2024-12-09 10:18:42.552995] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1261:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed
00:03:50.544  passed
00:03:50.544    Test: test_nvme_pcie_hotplug_monitor ...passed
00:03:50.544    Test: test_shadow_doorbell_update ...passed
00:03:50.544    Test: test_build_contig_hw_sgl_request ...passed
00:03:50.544    Test: test_nvme_pcie_qpair_build_metadata ...passed
00:03:50.544    Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed
00:03:50.544    Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed
00:03:50.544    Test: test_nvme_pcie_qpair_build_contig_request ...passed
00:03:50.544    Test: test_nvme_pcie_ctrlr_regs_get_set ...passed
00:03:50.544    Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed
00:03:50.544    Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed
00:03:50.544    Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed
00:03:50.544    Test: test_nvme_pcie_ctrlr_config_pmr ...passed
00:03:50.544    Test: test_nvme_pcie_ctrlr_map_io_pmr ...[2024-12-09 10:18:42.553050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:03:50.544  [2024-12-09 10:18:42.553075] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries
00:03:50.544  [2024-12-09 10:18:42.553156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1242:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned
00:03:50.544  [2024-12-09 10:18:42.553188] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues.
00:03:50.544  [2024-12-09 10:18:42.553205] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value
00:03:50.544  [2024-12-09 10:18:42.553222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled
00:03:50.544  [2024-12-09 10:18:42.553236] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller
00:03:50.544  passed
00:03:50.544  
00:03:50.544  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.544                suites      1      1    n/a      0        0
00:03:50.544                 tests     14     14     14      0        0
00:03:50.544               asserts    235    235    235      0      n/a
00:03:50.544  
00:03:50.544  Elapsed time =    0.000 seconds
00:03:50.544   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut
00:03:50.544  
00:03:50.544  
00:03:50.544       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.544       http://cunit.sourceforge.net/
00:03:50.544  
00:03:50.544  
00:03:50.544  Suite: nvme_ns_cmd
00:03:50.544    Test: nvme_poll_group_create_test ...passed
00:03:50.544    Test: nvme_poll_group_add_remove_test ...[2024-12-09 10:18:42.560095] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_poll_group.c: 189:spdk_nvme_poll_group_add: *ERROR*: Queue pair without interrupts cannot be added to poll group
00:03:50.544  passed
00:03:50.544    Test: nvme_poll_group_process_completions ...passed
00:03:50.544    Test: nvme_poll_group_destroy_test ...passed
00:03:50.544    Test: nvme_poll_group_get_free_stats ...passed
00:03:50.544  
00:03:50.544  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.544                suites      1      1    n/a      0        0
00:03:50.544                 tests      5      5      5      0        0
00:03:50.544               asserts    103    103    103      0      n/a
00:03:50.544  
00:03:50.544  Elapsed time =    0.000 seconds
00:03:50.544   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut
00:03:50.544  
00:03:50.544  
00:03:50.544       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.544       http://cunit.sourceforge.net/
00:03:50.544  
00:03:50.544  
00:03:50.544  Suite: nvme_quirks
00:03:50.544    Test: test_nvme_quirks_striping ...passed
00:03:50.544  
00:03:50.544  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.544                suites      1      1    n/a      0        0
00:03:50.544                 tests      1      1      1      0        0
00:03:50.544               asserts      5      5      5      0      n/a
00:03:50.544  
00:03:50.544  Elapsed time =    0.000 seconds
00:03:50.544   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut
00:03:50.544  
00:03:50.544  
00:03:50.544       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.544       http://cunit.sourceforge.net/
00:03:50.544  
00:03:50.544  
00:03:50.544  Suite: nvme_tcp
00:03:50.544    Test: test_nvme_tcp_pdu_set_data_buf ...passed
00:03:50.544    Test: test_nvme_tcp_build_iovs ...passed
00:03:50.544    Test: test_nvme_tcp_build_sgl_request ...passed
00:03:50.544    Test: test_nvme_tcp_pdu_set_data_buf_with_md ...[2024-12-09 10:18:42.571464] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 791:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x8204aa900, and the iovcnt=16, remaining_size=28672
00:03:50.544  passed
00:03:50.544    Test: test_nvme_tcp_build_iovs_with_md ...passed
00:03:50.544    Test: test_nvme_tcp_req_complete_safe ...passed
00:03:50.544    Test: test_nvme_tcp_req_get ...passed
00:03:50.544    Test: test_nvme_tcp_req_init ...passed
00:03:50.544    Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed
00:03:50.544    Test: test_nvme_tcp_qpair_write_pdu ...passed
00:03:50.544    Test: test_nvme_tcp_qpair_set_recv_state ...passed
00:03:50.544    Test: test_nvme_tcp_alloc_reqs ...passed
00:03:50.544    Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed
00:03:50.544    Test: test_nvme_tcp_pdu_ch_handle ...passed
00:03:50.544    Test: test_nvme_tcp_qpair_connect_sock ...[2024-12-09 10:18:42.571839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(7) to be set
00:03:50.544  [2024-12-09 10:18:42.571899] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.571928] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1133:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x8204abc40
00:03:50.544  [2024-12-09 10:18:42.571946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1193:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0
00:03:50.544  [2024-12-09 10:18:42.571961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.571977] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1143:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated
00:03:50.544  [2024-12-09 10:18:42.571992] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.572007] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:03:50.544  [2024-12-09 10:18:42.572022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.572037] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.572053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.572068] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.572085] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.572100] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.544  [2024-12-09 10:18:42.572155] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3
00:03:50.545  [2024-12-09 10:18:42.572171] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:03:50.545  passed
00:03:50.545    Test: test_nvme_tcp_qpair_icreq_send ...passed
00:03:50.545    Test: test_nvme_tcp_c2h_payload_handle ...passed
00:03:50.545    Test: test_nvme_tcp_icresp_handle ...passed
00:03:50.545    Test: test_nvme_tcp_pdu_payload_handle ...passed
00:03:50.545    Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed
00:03:50.545    Test: test_nvme_tcp_ctrlr_connect_qpair ...[2024-12-09 10:18:42.635326] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2245:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed
00:03:50.545  [2024-12-09 10:18:42.635398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1301:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8204ac078): PDU Sequence Error
00:03:50.545  [2024-12-09 10:18:42.635412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1476:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1
00:03:50.545  [2024-12-09 10:18:42.635423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1484:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048
00:03:50.545  [2024-12-09 10:18:42.635432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.545  [2024-12-09 10:18:42.635441] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1492:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64
00:03:50.545  [2024-12-09 10:18:42.635450] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(6) to be set
00:03:50.545  [2024-12-09 10:18:42.635459] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204ac4b0 is same with the state(0) to be set
00:03:50.545  [2024-12-09 10:18:42.635472] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1301:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x8204ac078): PDU Sequence Error
00:03:50.545  [2024-12-09 10:18:42.635490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1553:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x8204ac4b0
00:03:50.545  passed
00:03:50.545    Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-12-09 10:18:42.635525] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 357:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x8204aa210, errno=0, rc=0
00:03:50.545  [2024-12-09 10:18:42.635536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204aa210 is same with the state(6) to be set
00:03:50.545  [2024-12-09 10:18:42.635544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8204aa210 is same with the state(6) to be set
00:03:50.545  [2024-12-09 10:18:42.635589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2086:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8204aa210 (0): No error: 0
00:03:50.545  passed
00:03:50.545    Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-12-09 10:18:42.635599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2086:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8204aa210 (0): No error: 0
00:03:50.545  passed
00:03:50.545    Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed
00:03:50.545    Test: test_nvme_tcp_poll_group_get_stats ...passed
00:03:50.545    Test: test_nvme_tcp_ctrlr_construct ...passed
00:03:50.545    Test: test_nvme_tcp_qpair_submit_request ...[2024-12-09 10:18:42.691161] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2437:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:03:50.545  [2024-12-09 10:18:42.691215] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2437:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:03:50.545  [2024-12-09 10:18:42.691243] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.545  [2024-12-09 10:18:42.691250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2900:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.545  [2024-12-09 10:18:42.691289] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2437:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:03:50.545  [2024-12-09 10:18:42.691295] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:03:50.545  [2024-12-09 10:18:42.691304] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2233:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254
00:03:50.545  [2024-12-09 10:18:42.691310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:03:50.545  [2024-12-09 10:18:42.691322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x37debaa6c000 with addr=192.168.1.78, port=23
00:03:50.545  [2024-12-09 10:18:42.691328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:03:50.545  [2024-12-09 10:18:42.691343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 791:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x37debaa37180, and the iovcnt=1, remaining_size=1024
00:03:50.545  passed
00:03:50.545  
00:03:50.545  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.545                suites      1      1    n/a      0        0
00:03:50.545                 tests     27     27     27      0        0
00:03:50.545               asserts    624    624    624      0      n/a
00:03:50.545  
00:03:50.545  Elapsed time =    0.047 seconds
00:03:50.545  [2024-12-09 10:18:42.691349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 977:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed
00:03:50.545   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut
00:03:50.805  
00:03:50.805  
00:03:50.805       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.805       http://cunit.sourceforge.net/
00:03:50.805  
00:03:50.805  
00:03:50.805  Suite: nvme_transport
00:03:50.805    Test: test_nvme_get_transport ...passed
00:03:50.806    Test: test_nvme_transport_poll_group_connect_qpair ...passed
00:03:50.806    Test: test_nvme_transport_poll_group_disconnect_qpair ...passed
00:03:50.806    Test: test_nvme_transport_poll_group_add_remove ...passed
00:03:50.806    Test: test_ctrlr_get_memory_domains ...passed
00:03:50.806  
00:03:50.806  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.806                suites      1      1    n/a      0        0
00:03:50.806                 tests      5      5      5      0        0
00:03:50.806               asserts     28     28     28      0      n/a
00:03:50.806  
00:03:50.806  Elapsed time =    0.000 seconds
00:03:50.806   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut
00:03:50.806  
00:03:50.806  
00:03:50.806       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.806       http://cunit.sourceforge.net/
00:03:50.806  
00:03:50.806  
00:03:50.806  Suite: nvme_io_msg
00:03:50.806    Test: test_nvme_io_msg_send ...passed
00:03:50.806    Test: test_nvme_io_msg_process ...passed
00:03:50.806    Test: test_nvme_io_msg_ctrlr_register_unregister ...passed
00:03:50.806  
00:03:50.806  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.806                suites      1      1    n/a      0        0
00:03:50.806                 tests      3      3      3      0        0
00:03:50.806               asserts     56     56     56      0      n/a
00:03:50.806  
00:03:50.806  Elapsed time =    0.000 seconds
00:03:50.806   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut
00:03:50.806  
00:03:50.806  
00:03:50.806       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.806       http://cunit.sourceforge.net/
00:03:50.806  
00:03:50.806  
00:03:50.806  Suite: nvme_pcie_common
00:03:50.806    Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-12-09 10:18:42.714915] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 112:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range!
00:03:50.806  passed
00:03:50.806    Test: test_nvme_pcie_qpair_construct_destroy ...passed
00:03:50.806    Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed
00:03:50.806    Test: test_nvme_pcie_ctrlr_connect_qpair ...passed
00:03:50.806    Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-12-09 10:18:42.715127] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 541:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed!
00:03:50.806  [2024-12-09 10:18:42.715140] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 494:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq!
00:03:50.806  [2024-12-09 10:18:42.715149] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 588:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq
00:03:50.806  passed
00:03:50.806    Test: test_nvme_pcie_poll_group_get_stats ...passed
00:03:50.806  
00:03:50.806  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.806                suites      1      1    n/a      0        0
00:03:50.806                 tests      6      6      6      0        0
00:03:50.806               asserts    148    148    148      0      n/a
00:03:50.806  
00:03:50.806  Elapsed time =    0.000 seconds
00:03:50.806  [2024-12-09 10:18:42.715239] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.806  [2024-12-09 10:18:42.715248] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1851:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:50.806   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut
00:03:50.806  
00:03:50.806  
00:03:50.806       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.806       http://cunit.sourceforge.net/
00:03:50.806  
00:03:50.806  
00:03:50.806  Suite: nvme_fabric
00:03:50.806    Test: test_nvme_fabric_prop_set_cmd ...passed
00:03:50.806    Test: test_nvme_fabric_prop_get_cmd ...passed
00:03:50.806    Test: test_nvme_fabric_get_discovery_log_page ...passed
00:03:50.806    Test: test_nvme_fabric_discover_probe ...passed
00:03:50.806    Test: test_nvme_fabric_qpair_connect ...passed
00:03:50.806  
00:03:50.806  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.806                suites      1      1    n/a      0        0
00:03:50.806                 tests      5      5      5      0        0
00:03:50.806               asserts     60     60     60      0      n/a
00:03:50.806  
00:03:50.806  Elapsed time =    0.000 seconds
00:03:50.806  [2024-12-09 10:18:42.721156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 606:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1
00:03:50.806   10:18:42 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut
00:03:50.806  
00:03:50.806  
00:03:50.806       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.806       http://cunit.sourceforge.net/
00:03:50.806  
00:03:50.806  
00:03:50.806  Suite: nvme_opal
00:03:50.806    Test: test_opal_nvme_security_recv_send_done ...passed
00:03:50.806    Test: test_opal_add_short_atom_header ...[2024-12-09 10:18:42.726661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer.
00:03:50.806  passed
00:03:50.806  
00:03:50.806  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:50.806                suites      1      1    n/a      0        0
00:03:50.806                 tests      2      2      2      0        0
00:03:50.806               asserts     22     22     22      0      n/a
00:03:50.806  
00:03:50.806  Elapsed time =    0.000 seconds
00:03:50.806  
00:03:50.806  real	0m0.431s
00:03:50.806  user	0m0.063s
00:03:50.806  sys	0m0.134s
00:03:50.806   10:18:42 unittest.unittest_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:50.806   10:18:42 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x
00:03:50.806  ************************************
00:03:50.806  END TEST unittest_nvme
00:03:50.806  ************************************
00:03:50.806   10:18:42 unittest -- unit/unittest.sh@231 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:03:50.806   10:18:42 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:50.806   10:18:42 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:50.806   10:18:42 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:50.806  ************************************
00:03:50.806  START TEST unittest_log
00:03:50.806  ************************************
00:03:50.806   10:18:42 unittest.unittest_log -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut
00:03:50.806  
00:03:50.806  
00:03:50.806       CUnit - A unit testing framework for C - Version 2.1-3
00:03:50.806       http://cunit.sourceforge.net/
00:03:50.806  
00:03:50.806  
00:03:50.806  Suite: log
00:03:50.806    Test: log_test ...[2024-12-09 10:18:42.787849] log_ut.c:  56:log_test: *WARNING*: log warning unit test
00:03:50.806  passed
00:03:50.806    Test: deprecation ...[2024-12-09 10:18:42.788041] log_ut.c:  57:log_test: *DEBUG*: log test
00:03:50.806  log dump test:
00:03:50.806  00000000  6c 6f 67 20 64 75 6d 70                            log dump
00:03:50.806  spdk dump test:
00:03:50.806  00000000  73 70 64 6b 20 64 75 6d  70                        spdk dump
00:03:50.806  spdk dump test:
00:03:50.806  00000000  73 70 64 6b 20 64 75 6d  70 20 31 36 20 6d 6f 72  spdk dump 16 mor
00:03:50.806  00000010  65 20 63 68 61 72 73                              e chars
00:03:51.748  passed
00:03:51.748    Test: log_ext_test ...passed
00:03:51.749  
00:03:51.749  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:51.749                suites      1      1    n/a      0        0
00:03:51.749                 tests      3      3      3      0        0
00:03:51.749               asserts     77     77     77      0      n/a
00:03:51.749  
00:03:51.749  Elapsed time =    0.000 seconds
00:03:51.749  
00:03:51.749  real	0m1.013s
00:03:51.749  user	0m0.005s
00:03:51.749  sys	0m0.004s
00:03:51.749   10:18:43 unittest.unittest_log -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:51.749  ************************************
00:03:51.749  END TEST unittest_log
00:03:51.749  ************************************
00:03:51.749   10:18:43 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x
00:03:51.749   10:18:43 unittest -- unit/unittest.sh@232 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:03:51.749   10:18:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:51.749   10:18:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:51.749   10:18:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:51.749  ************************************
00:03:51.749  START TEST unittest_lvol
00:03:51.749  ************************************
00:03:51.749   10:18:43 unittest.unittest_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut
00:03:51.749  
00:03:51.749  
00:03:51.749       CUnit - A unit testing framework for C - Version 2.1-3
00:03:51.749       http://cunit.sourceforge.net/
00:03:51.749  
00:03:51.749  
00:03:51.749  Suite: lvol
00:03:51.749    Test: lvs_init_unload_success ...[2024-12-09 10:18:43.869577] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 889:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store
00:03:51.749  passed
00:03:51.749    Test: lvs_init_destroy_success ...passed
00:03:51.749    Test: lvs_init_opts_success ...passed
00:03:51.749    Test: lvs_unload_lvs_is_null_fail ...passed
00:03:51.749    Test: lvs_names ...[2024-12-09 10:18:43.869828] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 959:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store
00:03:51.749  [2024-12-09 10:18:43.869858] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 879:spdk_lvs_unload: *ERROR*: Lvol store is NULL
00:03:51.749  [2024-12-09 10:18:43.869873] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 723:spdk_lvs_init: *ERROR*: Name must be between 1 and 63 characters
00:03:51.749  passed
00:03:51.749    Test: lvol_create_destroy_success ...passed
00:03:51.749    Test: lvol_create_fail ...passed
00:03:51.749    Test: lvol_destroy_fail ...[2024-12-09 10:18:43.869890] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 723:spdk_lvs_init: *ERROR*: Name must be between 1 and 63 characters
00:03:51.749  [2024-12-09 10:18:43.869912] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 733:spdk_lvs_init: *ERROR*: lvolstore with name x already exists
00:03:51.749  [2024-12-09 10:18:43.869971] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 691:spdk_lvs_init: *ERROR*: Blobstore device does not exist
00:03:51.749  [2024-12-09 10:18:43.869987] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1187:spdk_lvol_create: *ERROR*: lvol store does not exist
00:03:51.749  passed
00:03:51.749    Test: lvol_close ...passed
00:03:51.749    Test: lvol_resize ...passed
00:03:51.749    Test: lvol_set_read_only ...passed
00:03:51.749    Test: test_lvs_load ...passed
00:03:51.749    Test: lvols_load ...passed
00:03:51.749    Test: lvol_open ...passed
00:03:51.749    Test: lvol_snapshot ...passed
00:03:51.749    Test: lvol_snapshot_fail ...passed
00:03:51.749    Test: lvol_clone ...passed
00:03:51.749    Test: lvol_clone_fail ...passed
00:03:51.749    Test: lvol_iter_clones ...passed
00:03:51.749    Test: lvol_refcnt ...passed
00:03:51.749    Test: lvol_names ...passed
00:03:51.749    Test: lvol_create_thin_provisioned ...passed
00:03:51.749    Test: lvol_rename ...passed
00:03:51.749    Test: lvs_rename ...passed
00:03:51.749    Test: lvol_inflate ...passed
00:03:51.749    Test: lvol_decouple_parent ...passed
00:03:51.749    Test: lvol_get_xattr ...passed
00:03:51.749    Test: lvol_esnap_reload ...passed
00:03:51.749    Test: lvol_esnap_create_bad_args ...[2024-12-09 10:18:43.870018] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1023:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal
00:03:51.749  [2024-12-09 10:18:43.870041] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1611:spdk_lvol_close: *ERROR*: lvol does not exist
00:03:51.749  [2024-12-09 10:18:43.870052] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 992:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol
00:03:51.749  [2024-12-09 10:18:43.870109] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value
00:03:51.749  [2024-12-09 10:18:43.870121] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options
00:03:51.749  [2024-12-09 10:18:43.870146] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:03:51.749  [2024-12-09 10:18:43.870177] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list
00:03:51.749  [2024-12-09 10:18:43.870256] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists
00:03:51.749  [2024-12-09 10:18:43.870307] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists
00:03:51.749  [2024-12-09 10:18:43.870351] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1569:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol f809d16f-b616-11ef-9b05-d5e34e08fe3b because it is still open
00:03:51.749  [2024-12-09 10:18:43.870371] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1153:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:03:51.749  [2024-12-09 10:18:43.870386] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:03:51.749  [2024-12-09 10:18:43.870409] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1166:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created
00:03:51.749  [2024-12-09 10:18:43.870450] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:03:51.749  [2024-12-09 10:18:43.870468] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1521:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs
00:03:51.749  [2024-12-09 10:18:43.870497] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 766:lvs_rename_cb: *ERROR*: Lvol store rename operation failed
00:03:51.749  [2024-12-09 10:18:43.870520] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1655:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:03:51.749  [2024-12-09 10:18:43.870542] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1655:lvol_inflate_cb: *ERROR*: Could not inflate lvol
00:03:51.749  [2024-12-09 10:18:43.870585] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1242:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist
00:03:51.749  [2024-12-09 10:18:43.870598] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1153:lvs_verify_lvol_name: *ERROR*: Name has no null terminator.
00:03:51.749  passed
00:03:51.749    Test: lvol_esnap_create_delete ...passed
00:03:51.749    Test: lvol_esnap_load_esnaps ...passed
00:03:51.749    Test: lvol_esnap_missing ...passed
00:03:51.749    Test: lvol_esnap_hotplug ...
00:03:51.749  [2024-12-09 10:18:43.870611] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1257:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576
00:03:51.749  [2024-12-09 10:18:43.870627] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists
00:03:51.749  [2024-12-09 10:18:43.870648] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists
00:03:51.749  [2024-12-09 10:18:43.870681] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1830:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context
00:03:51.749  [2024-12-09 10:18:43.870706] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:03:51.749  [2024-12-09 10:18:43.870718] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1159:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists
00:03:51.749  	lvol_esnap_hotplug scenario 0: PASS - one missing, happy path
00:03:51.749  	lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set
00:03:51.749  	lvol_esnap_hotplug scenario 2: PASS - one missing, cb returns -ENOMEM
00:03:51.749  	lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path
00:03:51.749  	lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM
00:03:51.749  	lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM
00:03:51.749  	lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path
00:03:51.749  	lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing
00:03:51.749  	lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path
00:03:51.749  	lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing
00:03:51.749  	lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing
00:03:51.749  	lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing
00:03:51.749  	lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing
00:03:51.749  passed
00:03:51.749    Test: lvol_get_by ...passed
00:03:51.749    Test: lvol_shallow_copy ...passed
00:03:51.749    Test: lvol_set_parent ...passed
00:03:51.749    Test: lvol_set_external_parent ...passed
00:03:51.749  
00:03:51.749  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:51.749                suites      1      1    n/a      0        0
00:03:51.749                 tests     37     37     37      0        0
00:03:51.749               asserts   1505   1505   1505      0      n/a
00:03:51.749  
00:03:51.749  Elapsed time =    0.000 seconds
00:03:51.749  [2024-12-09 10:18:43.870813] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2060:lvs_esnap_degraded_hotplug: *ERROR*: lvol f809e368-b616-11ef-9b05-d5e34e08fe3b: failed to create esnap bs_dev: error -12
00:03:51.749  [2024-12-09 10:18:43.870862] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2060:lvs_esnap_degraded_hotplug: *ERROR*: lvol f809e53a-b616-11ef-9b05-d5e34e08fe3b: failed to create esnap bs_dev: error -12
00:03:51.749  [2024-12-09 10:18:43.870891] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2060:lvs_esnap_degraded_hotplug: *ERROR*: lvol f809e67f-b616-11ef-9b05-d5e34e08fe3b: failed to create esnap bs_dev: error -12
00:03:51.749  [2024-12-09 10:18:43.871087] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2271:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL
00:03:51.749  [2024-12-09 10:18:43.871099] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2278:spdk_lvol_shallow_copy: *ERROR*: lvol f809ee2a-b616-11ef-9b05-d5e34e08fe3b shallow copy, ext_dev must not be NULL
00:03:51.749  [2024-12-09 10:18:43.871129] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2335:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL
00:03:51.749  [2024-12-09 10:18:43.871140] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2341:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL
00:03:51.749  [2024-12-09 10:18:43.871164] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2390:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL
00:03:51.749  [2024-12-09 10:18:43.871176] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2396:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL
00:03:51.749  [2024-12-09 10:18:43.871187] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2403:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID
00:03:51.749  
00:03:51.749  real	0m0.010s
00:03:51.749  user	0m0.001s
00:03:51.749  sys	0m0.008s
00:03:51.750   10:18:43 unittest.unittest_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:51.750   10:18:43 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x
00:03:51.750  ************************************
00:03:51.750  END TEST unittest_lvol
00:03:51.750  ************************************
00:03:52.013   10:18:43 unittest -- unit/unittest.sh@233 -- # [[ y == y ]]
00:03:52.013   10:18:43 unittest -- unit/unittest.sh@234 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:03:52.013   10:18:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.013   10:18:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.013   10:18:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.013  ************************************
00:03:52.013  START TEST unittest_nvme_rdma
00:03:52.013  ************************************
00:03:52.013   10:18:43 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut
00:03:52.013  
00:03:52.013  
00:03:52.013       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.013       http://cunit.sourceforge.net/
00:03:52.013  
00:03:52.013  
00:03:52.013  Suite: nvme_rdma
00:03:52.013    Test: test_nvme_rdma_build_sgl_request ...[2024-12-09 10:18:43.945804] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1421:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34
00:03:52.013  passed
00:03:52.013    Test: test_nvme_rdma_build_sgl_inline_request ...passed
00:03:52.013    Test: test_nvme_rdma_build_contig_request ...passed
00:03:52.013    Test: test_nvme_rdma_build_contig_inline_request ...passed
00:03:52.013    Test: test_nvme_rdma_create_reqs ...passed
00:03:52.013    Test: test_nvme_rdma_create_rsps ...passed
00:03:52.013    Test: test_nvme_rdma_ctrlr_create_qpair ...[2024-12-09 10:18:43.946044] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:03:52.013  [2024-12-09 10:18:43.946063] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1665:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60)
00:03:52.013  [2024-12-09 10:18:43.946089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1561:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215
00:03:52.013  [2024-12-09 10:18:43.946108] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 952:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs
00:03:52.013  [2024-12-09 10:18:43.946138] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 870:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls
00:03:52.013  [2024-12-09 10:18:43.946157] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1989:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2.
00:03:52.013  [2024-12-09 10:18:43.946169] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1989:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2.
00:03:52.013  passed
00:03:52.013    Test: test_nvme_rdma_poller_create ...passed
00:03:52.013    Test: test_nvme_rdma_qpair_process_cm_event ...passed
00:03:52.013    Test: test_nvme_rdma_ctrlr_construct ...passed
00:03:52.013    Test: test_nvme_rdma_req_put_and_get ...passed
00:03:52.013    Test: test_nvme_rdma_req_init ...passed
00:03:52.013    Test: test_nvme_rdma_validate_cm_event ...passed
00:03:52.013    Test: test_nvme_rdma_qpair_init ...passed
00:03:52.013    Test: test_nvme_rdma_qpair_submit_request ...passed
00:03:52.013    Test: test_rdma_ctrlr_get_memory_domains ...passed
00:03:52.013    Test: test_rdma_get_memory_translation ...[2024-12-09 10:18:43.946200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 476:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255]
00:03:52.013  [2024-12-09 10:18:43.946265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0)
00:03:52.013  [2024-12-09 10:18:43.946279] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 570:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10)
00:03:52.013  [2024-12-09 10:18:43.946307] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1410:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0
00:03:52.013  [2024-12-09 10:18:43.946319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1421:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1
00:03:52.013  passed
00:03:52.013    Test: test_get_rdma_qpair_from_wc ...passed
00:03:52.013    Test: test_nvme_rdma_ctrlr_get_max_sges ...passed
00:03:52.013    Test: test_nvme_rdma_poll_group_get_stats ...passed
00:03:52.013    Test: test_nvme_rdma_qpair_set_poller ...[2024-12-09 10:18:43.946342] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3558:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:52.013  [2024-12-09 10:18:43.946354] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3558:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer
00:03:52.013  [2024-12-09 10:18:43.946379] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3252:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0.
00:03:52.013  [2024-12-09 10:18:43.946391] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3298:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef
00:03:52.013  [2024-12-09 10:18:43.946402] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 673:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820491f60 on poll group 0x2f6d04270000
00:03:52.014  [2024-12-09 10:18:43.946414] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3252:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0.
00:03:52.014  [2024-12-09 10:18:43.946426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3298:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0
00:03:52.014  [2024-12-09 10:18:43.946437] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 673:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820491f60 on poll group 0x2f6d04270000
00:03:52.014  [2024-12-09 10:18:43.946494] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 651:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0
00:03:52.014  passed
00:03:52.014  
00:03:52.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.014                suites      1      1    n/a      0        0
00:03:52.014                 tests     21     21     21      0        0
00:03:52.014               asserts    395    395    395      0      n/a
00:03:52.014  
00:03:52.014  Elapsed time =    0.000 seconds
00:03:52.014  
00:03:52.014  real	0m0.007s
00:03:52.014  user	0m0.000s
00:03:52.014  sys	0m0.008s
00:03:52.014   10:18:43 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.014   10:18:43 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x
00:03:52.014  ************************************
00:03:52.014  END TEST unittest_nvme_rdma
00:03:52.014  ************************************
00:03:52.014   10:18:43 unittest -- unit/unittest.sh@235 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:03:52.014   10:18:43 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.014   10:18:43 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.014   10:18:43 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.014  ************************************
00:03:52.014  START TEST unittest_nvmf_transport
00:03:52.014  ************************************
00:03:52.014   10:18:43 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut
00:03:52.014  
00:03:52.014  
00:03:52.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.014       http://cunit.sourceforge.net/
00:03:52.014  
00:03:52.014  
00:03:52.014  Suite: nvmf
00:03:52.014    Test: test_spdk_nvmf_transport_create ...[2024-12-09 10:18:43.985329] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable.
00:03:52.014  [2024-12-09 10:18:43.985549] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0
00:03:52.014  passed
00:03:52.014    Test: test_nvmf_transport_poll_group_create ...[2024-12-09 10:18:43.985568] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536
00:03:52.014  [2024-12-09 10:18:43.985585] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB
00:03:52.014  passed
00:03:52.014    Test: test_spdk_nvmf_transport_opts_init ...[2024-12-09 10:18:43.985624] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 834:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable.
00:03:52.014  passed
00:03:52.014    Test: test_spdk_nvmf_transport_listen_ext ...passed
00:03:52.014  
00:03:52.014  [2024-12-09 10:18:43.985637] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 839:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL
00:03:52.014  [2024-12-09 10:18:43.985648] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 844:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value
00:03:52.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.014                suites      1      1    n/a      0        0
00:03:52.014                 tests      4      4      4      0        0
00:03:52.014               asserts     49     49     49      0      n/a
00:03:52.014  
00:03:52.014  Elapsed time =    0.000 seconds
00:03:52.014  
00:03:52.014  real	0m0.006s
00:03:52.014  user	0m0.006s
00:03:52.014  sys	0m0.000s
00:03:52.014   10:18:43 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.014   10:18:43 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x
00:03:52.014  ************************************
00:03:52.014  END TEST unittest_nvmf_transport
00:03:52.014  ************************************
00:03:52.014   10:18:44 unittest -- unit/unittest.sh@236 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.014  ************************************
00:03:52.014  START TEST unittest_rdma
00:03:52.014  ************************************
00:03:52.014   10:18:44 unittest.unittest_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut
00:03:52.014  
00:03:52.014  
00:03:52.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.014       http://cunit.sourceforge.net/
00:03:52.014  
00:03:52.014  
00:03:52.014  Suite: rdma_common
00:03:52.014    Test: test_spdk_rdma_pd ...[2024-12-09 10:18:44.018477] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:03:52.014  [2024-12-09 10:18:44.018650] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 400:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD
00:03:52.014  passed
00:03:52.014  
00:03:52.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.014                suites      1      1    n/a      0        0
00:03:52.014                 tests      1      1      1      0        0
00:03:52.014               asserts     31     31     31      0      n/a
00:03:52.014  
00:03:52.014  Elapsed time =    0.000 seconds
00:03:52.014  
00:03:52.014  real	0m0.004s
00:03:52.014  user	0m0.000s
00:03:52.014  sys	0m0.008s
00:03:52.014   10:18:44 unittest.unittest_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.014   10:18:44 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x
00:03:52.014  ************************************
00:03:52.014  END TEST unittest_rdma
00:03:52.014  ************************************
00:03:52.014   10:18:44 unittest -- unit/unittest.sh@237 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.014  ************************************
00:03:52.014  START TEST unittest_nvmf_rdma
00:03:52.014  ************************************
00:03:52.014   10:18:44 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut
00:03:52.014  
00:03:52.014  
00:03:52.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.014       http://cunit.sourceforge.net/
00:03:52.014  
00:03:52.014  
00:03:52.014  Suite: nvmf
00:03:52.014    Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-12-09 10:18:44.067521] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1865:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000
00:03:52.014  passed
00:03:52.014    Test: test_spdk_nvmf_rdma_request_process ...[2024-12-09 10:18:44.067685] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1915:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0
00:03:52.014  [2024-12-09 10:18:44.067697] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1915:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000
00:03:52.014  passed
00:03:52.014    Test: test_nvmf_rdma_get_optimal_poll_group ...passed
00:03:52.014    Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed
00:03:52.014    Test: test_nvmf_rdma_opts_init ...passed
00:03:52.014    Test: test_nvmf_rdma_request_free_data ...passed
00:03:52.014    Test: test_nvmf_rdma_resources_create ...passed
00:03:52.014    Test: test_nvmf_rdma_qpair_compare ...passed
00:03:52.014    Test: test_nvmf_rdma_resize_cq ...[2024-12-09 10:18:44.068225] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 956:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0
00:03:52.014  Using CQ of insufficient size may lead to CQ overrun
00:03:52.014  [2024-12-09 10:18:44.068236] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 961:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3)
00:03:52.014  passed
00:03:52.014  
00:03:52.014  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.014                suites      1      1    n/a      0        0
00:03:52.014                 tests      9      9      9      0        0
00:03:52.014               asserts    579    579    579      0      n/a
00:03:52.014  
00:03:52.014  Elapsed time =    0.000 seconds
00:03:52.014  [2024-12-09 10:18:44.068273] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 968:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0
00:03:52.014  
00:03:52.014  real	0m0.007s
00:03:52.014  user	0m0.000s
00:03:52.014  sys	0m0.008s
00:03:52.014   10:18:44 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.014  ************************************
00:03:52.014  END TEST unittest_nvmf_rdma
00:03:52.014  ************************************
00:03:52.014   10:18:44 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:03:52.014   10:18:44 unittest -- unit/unittest.sh@240 -- # [[ n == y ]]
00:03:52.014   10:18:44 unittest -- unit/unittest.sh@244 -- # run_test unittest_nvmf unittest_nvmf
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.014   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.014  ************************************
00:03:52.014  START TEST unittest_nvmf
00:03:52.014  ************************************
00:03:52.014   10:18:44 unittest.unittest_nvmf -- common/autotest_common.sh@1129 -- # unittest_nvmf
00:03:52.014   10:18:44 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut
00:03:52.014  
00:03:52.014  
00:03:52.014       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.014       http://cunit.sourceforge.net/
00:03:52.014  
00:03:52.014  
00:03:52.014  Suite: nvmf
00:03:52.014    Test: test_get_log_page ...[2024-12-09 10:18:44.107347] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2658:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2
00:03:52.014  passed
00:03:52.014    Test: test_process_fabrics_cmd ...passed
00:03:52.014    Test: test_connect ...[2024-12-09 10:18:44.107557] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4891:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT
00:03:52.014  [2024-12-09 10:18:44.107627] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1016:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small
00:03:52.014  [2024-12-09 10:18:44.107643] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 878:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234
00:03:52.015  [2024-12-09 10:18:44.107657] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1055:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated
00:03:52.015  [2024-12-09 10:18:44.107670] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1'
00:03:52.015  [2024-12-09 10:18:44.107682] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 889:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0
00:03:52.015  [2024-12-09 10:18:44.107695] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 897:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31)
00:03:52.015  [2024-12-09 10:18:44.107707] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 903:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63)
00:03:52.015  [2024-12-09 10:18:44.107720] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 930:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234).
00:03:52.015  passed
00:03:52.015    Test: test_get_ns_id_desc_list ...passed
00:03:52.015    Test: test_identify_ns ...passed
00:03:52.015    Test: test_identify_ns_iocs_specific ...passed
00:03:52.015    Test: test_reservation_write_exclusive ...[2024-12-09 10:18:44.107737] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff
00:03:52.015  [2024-12-09 10:18:44.107752] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 679:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller
00:03:52.015  [2024-12-09 10:18:44.107777] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 685:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled
00:03:52.015  [2024-12-09 10:18:44.107793] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 692:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3
00:03:52.015  [2024-12-09 10:18:44.107806] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 699:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3
00:03:52.015  [2024-12-09 10:18:44.107820] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 723:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2
00:03:52.015  [2024-12-09 10:18:44.107839] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 296:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0)
00:03:52.015  [2024-12-09 10:18:44.107859] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 809:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0)
00:03:52.015  [2024-12-09 10:18:44.107873] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 809:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0)
00:03:52.015  [2024-12-09 10:18:44.107933] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:03:52.015  [2024-12-09 10:18:44.108016] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4
00:03:52.015  [2024-12-09 10:18:44.108049] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295
00:03:52.015  [2024-12-09 10:18:44.108091] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:03:52.015  [2024-12-09 10:18:44.108182] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0
00:03:52.015  passed
00:03:52.015    Test: test_reservation_exclusive_access ...passed
00:03:52.015    Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed
00:03:52.015    Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed
00:03:52.015    Test: test_reservation_notification_log_page ...passed
00:03:52.015    Test: test_get_dif_ctx ...passed
00:03:52.015    Test: test_set_get_features ...passed
00:03:52.015    Test: test_identify_ctrlr ...passed
00:03:52.015    Test: test_identify_ctrlr_iocs_specific ...[2024-12-09 10:18:44.108284] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1652:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:03:52.015  [2024-12-09 10:18:44.108297] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1652:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9
00:03:52.015  [2024-12-09 10:18:44.108309] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1663:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3
00:03:52.015  [2024-12-09 10:18:44.108321] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1739:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit
00:03:52.015  passed
00:03:52.015    Test: test_custom_admin_cmd ...passed
00:03:52.015    Test: test_fused_compare_and_write ...passed
00:03:52.015    Test: test_multi_async_event_reqs ...passed
00:03:52.015    Test: test_get_ana_log_page_one_ns_per_anagrp ...passed
00:03:52.015    Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed
00:03:52.015    Test: test_multi_async_events ...passed
00:03:52.015    Test: test_rae ...passed
00:03:52.015    Test: test_nvmf_ctrlr_create_destruct ...passed
00:03:52.015    Test: test_nvmf_ctrlr_use_zcopy ...passed
00:03:52.015    Test: test_spdk_nvmf_request_zcopy_start ...[2024-12-09 10:18:44.108455] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4398:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations
00:03:52.015  [2024-12-09 10:18:44.108468] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4387:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:03:52.015  [2024-12-09 10:18:44.108480] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4405:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations
00:03:52.015  [2024-12-09 10:18:44.108561] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4891:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT
00:03:52.015  [2024-12-09 10:18:44.108576] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4917:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4
00:03:52.015  passed
00:03:52.015    Test: test_zcopy_read ...passed
00:03:52.015    Test: test_zcopy_write ...passed
00:03:52.015    Test: test_nvmf_property_set ...passed
00:03:52.015    Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed
00:03:52.015    Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed
00:03:52.015    Test: test_nvmf_ctrlr_ns_attachment ...passed
00:03:52.015    Test: test_nvmf_check_qpair_active ...passed
00:03:52.015  
00:03:52.015  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.015                suites      1      1    n/a      0        0
00:03:52.015                 tests     32     32     32      0        0
00:03:52.015               asserts    996    996    996      0      n/a
00:03:52.015  
00:03:52.015  Elapsed time =    0.000 seconds
00:03:52.015  [2024-12-09 10:18:44.108609] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1950:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:03:52.015  [2024-12-09 10:18:44.108620] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1950:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support
00:03:52.015  [2024-12-09 10:18:44.108634] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1974:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0
00:03:52.015  [2024-12-09 10:18:44.108646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1980:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0
00:03:52.015  [2024-12-09 10:18:44.108658] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1992:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:03:52.015  [2024-12-09 10:18:44.108668] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1992:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02
00:03:52.015  [2024-12-09 10:18:44.108699] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4891:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT
00:03:52.015  [2024-12-09 10:18:44.108711] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4905:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication
00:03:52.015  [2024-12-09 10:18:44.108723] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4917:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0
00:03:52.015  [2024-12-09 10:18:44.108735] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4917:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4
00:03:52.015  [2024-12-09 10:18:44.108746] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4917:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5
00:03:52.015   10:18:44 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut
00:03:52.015  
00:03:52.015  
00:03:52.015       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.015       http://cunit.sourceforge.net/
00:03:52.015  
00:03:52.015  
00:03:52.015  Suite: nvmf
00:03:52.015    Test: test_get_rw_params ...passed
00:03:52.015    Test: test_get_rw_ext_params ...passed
00:03:52.015    Test: test_lba_in_range ...passed
00:03:52.015    Test: test_get_dif_ctx ...passed
00:03:52.015    Test: test_nvmf_bdev_ctrlr_identify_ns ...passed
00:03:52.015    Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...[2024-12-09 10:18:44.115564] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 522:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch
00:03:52.015  passed
00:03:52.015    Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed
00:03:52.015    Test: test_nvmf_bdev_ctrlr_cmd ...[2024-12-09 10:18:44.115742] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 530:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media
00:03:52.015  [2024-12-09 10:18:44.115758] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 538:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023
00:03:52.015  [2024-12-09 10:18:44.115774] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1041:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media
00:03:52.015  [2024-12-09 10:18:44.115788] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c:1049:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023
00:03:52.015  [2024-12-09 10:18:44.115802] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 476:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media
00:03:52.015  [2024-12-09 10:18:44.115814] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 484:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512
00:03:52.015  passed
00:03:52.015    Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed
00:03:52.015    Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed
00:03:52.015  
00:03:52.015  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.015                suites      1      1    n/a      0        0
00:03:52.015                 tests     10     10     10      0        0
00:03:52.015               asserts    161    161    161      0      n/a
00:03:52.015  
00:03:52.015  Elapsed time =    0.000 seconds
00:03:52.015  [2024-12-09 10:18:44.115826] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 575:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib
00:03:52.015  [2024-12-09 10:18:44.115837] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 582:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media
00:03:52.015   10:18:44 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut
00:03:52.015  
00:03:52.015  
00:03:52.015       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.015       http://cunit.sourceforge.net/
00:03:52.015  
00:03:52.015  
00:03:52.015  Suite: nvmf
00:03:52.015    Test: test_discovery_log ...passed
00:03:52.015    Test: test_discovery_log_with_filters ...passed
00:03:52.015  
00:03:52.015  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.015                suites      1      1    n/a      0        0
00:03:52.015                 tests      2      2      2      0        0
00:03:52.015               asserts    238    238    238      0      n/a
00:03:52.015  
00:03:52.015  Elapsed time =    0.000 seconds
00:03:52.015   10:18:44 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut
00:03:52.015  
00:03:52.015  
00:03:52.015       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.015       http://cunit.sourceforge.net/
00:03:52.015  
00:03:52.015  
00:03:52.015  Suite: nvmf
00:03:52.016    Test: nvmf_test_create_subsystem ...[2024-12-09 10:18:44.126614] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix.
00:03:52.016  [2024-12-09 10:18:44.126744] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid
00:03:52.016  [2024-12-09 10:18:44.126759] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long.
00:03:52.016  [2024-12-09 10:18:44.126767] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid
00:03:52.016  [2024-12-09 10:18:44.126775] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter.
00:03:52.016  [2024-12-09 10:18:44.126782] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid
00:03:52.016  [2024-12-09 10:18:44.126789] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter.
00:03:52.016  [2024-12-09 10:18:44.126796] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid
00:03:52.016  [2024-12-09 10:18:44.126803] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol.
00:03:52.016  [2024-12-09 10:18:44.126810] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid
00:03:52.016  [2024-12-09 10:18:44.126817] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter.
00:03:52.016  [2024-12-09 10:18:44.126824] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid
00:03:52.016  [2024-12-09 10:18:44.126836] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223
00:03:52.016  [2024-12-09 10:18:44.126844] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid
00:03:52.016  passed
00:03:52.016    Test: test_spdk_nvmf_subsystem_add_ns ...passed
00:03:52.016    Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed
00:03:52.016    Test: test_spdk_nvmf_subsystem_set_sn ...passed
00:03:52.016    Test: test_spdk_nvmf_ns_visible ...[2024-12-09 10:18:44.126877] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8.
00:03:52.016  [2024-12-09 10:18:44.126884] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid
00:03:52.016  [2024-12-09 10:18:44.126894] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length
00:03:52.016  [2024-12-09 10:18:44.126901] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid
00:03:52.016  [2024-12-09 10:18:44.126909] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:03:52.016  [2024-12-09 10:18:44.126915] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid
00:03:52.016  [2024-12-09 10:18:44.126923] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly
00:03:52.016  [2024-12-09 10:18:44.126929] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid
00:03:52.016  [2024-12-09 10:18:44.126966] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use
00:03:52.016  [2024-12-09 10:18:44.126975] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2103:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295
00:03:52.016  [2024-12-09 10:18:44.126991] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2242:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace.
00:03:52.016  [2024-12-09 10:18:44.127011] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11
00:03:52.016  passed
00:03:52.016    Test: test_reservation_register ...passed
00:03:52.016    Test: test_reservation_register_with_ptpl ...passed
00:03:52.016    Test: test_reservation_acquire_preempt_1 ...passed
00:03:52.016    Test: test_reservation_acquire_release_with_ptpl ...passed
00:03:52.016    Test: test_reservation_release ...passed
00:03:52.016    Test: test_reservation_unregister_notification ...passed
00:03:52.016    Test: test_reservation_release_notification ...passed
00:03:52.016    Test: test_reservation_release_notification_write_exclusive ...passed
00:03:52.016    Test: test_reservation_clear_notification ...passed
00:03:52.016    Test: test_reservation_preempt_notification ...[2024-12-09 10:18:44.127056] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  [2024-12-09 10:18:44.127069] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3288:nvmf_ns_reservation_register: *ERROR*: No registrant
00:03:52.016  [2024-12-09 10:18:44.127179] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  [2024-12-09 10:18:44.127275] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  [2024-12-09 10:18:44.127290] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  [2024-12-09 10:18:44.127302] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  [2024-12-09 10:18:44.127314] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  [2024-12-09 10:18:44.127326] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  passed
00:03:52.016    Test: test_spdk_nvmf_ns_event ...passed
00:03:52.016    Test: test_nvmf_ns_reservation_add_remove_registrant ...passed
00:03:52.016    Test: test_nvmf_subsystem_add_ctrlr ...passed
00:03:52.016    Test: test_spdk_nvmf_subsystem_add_host ...passed
00:03:52.016    Test: test_nvmf_ns_reservation_report ...passed
00:03:52.016    Test: test_nvmf_nqn_is_valid ...passed
00:03:52.016    Test: test_nvmf_ns_reservation_restore ...passed
00:03:52.016    Test: test_nvmf_subsystem_state_change ...passed
00:03:52.016    Test: test_nvmf_reservation_custom_ops ...passed
00:03:52.016  
00:03:52.016  [2024-12-09 10:18:44.127337] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3232:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1
00:03:52.016  [2024-12-09 10:18:44.127387] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value
00:03:52.016  [2024-12-09 10:18:44.127397] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport
00:03:52.016  [2024-12-09 10:18:44.127412] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3594:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again
00:03:52.016  [2024-12-09 10:18:44.127427] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11
00:03:52.016  [2024-12-09 10:18:44.127434] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:  97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:f8310b65-b616-11ef-9b05-d5e34e08fe3": uuid is not the correct length
00:03:52.016  [2024-12-09 10:18:44.127442] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter.
00:03:52.016  [2024-12-09 10:18:44.127463] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2787:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file
00:03:52.016  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.016                suites      1      1    n/a      0        0
00:03:52.016                 tests     24     24     24      0        0
00:03:52.016               asserts    499    499    499      0      n/a
00:03:52.016  
00:03:52.016  Elapsed time =    0.000 seconds
00:03:52.016   10:18:44 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut
00:03:52.016  
00:03:52.016  
00:03:52.016       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.016       http://cunit.sourceforge.net/
00:03:52.016  
00:03:52.016  
00:03:52.016  Suite: nvmf
00:03:52.016    Test: test_nvmf_tcp_create ...[2024-12-09 10:18:44.137552] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 829:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes
00:03:52.016  passed
00:03:52.016    Test: test_nvmf_tcp_destroy ...passed
00:03:52.016    Test: test_nvmf_tcp_poll_group_create ...passed
00:03:52.016    Test: test_nvmf_tcp_send_c2h_data ...passed
00:03:52.016    Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed
00:03:52.016    Test: test_nvmf_tcp_in_capsule_data_handle ...passed
00:03:52.016    Test: test_nvmf_tcp_qpair_init_mem_resource ...[2024-12-09 10:18:44.145555] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068dcf8 is same with the state(5) to be set
00:03:52.016  passed
00:03:52.016    Test: test_nvmf_tcp_send_c2h_term_req ...passed
00:03:52.016    Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed
00:03:52.016    Test: test_nvmf_tcp_icreq_handle ...passed
00:03:52.016    Test: test_nvmf_tcp_check_xfer_type ...passed
00:03:52.016    Test: test_nvmf_tcp_invalid_sgl ...passed
00:03:52.016    Test: test_nvmf_tcp_pdu_ch_handle ...[2024-12-09 10:18:44.147626] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.016  [2024-12-09 10:18:44.147638] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.016  [2024-12-09 10:18:44.147645] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.016  [2024-12-09 10:18:44.147650] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147655] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147671] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2296:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:03:52.017  [2024-12-09 10:18:44.147677] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147682] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068de18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147688] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2296:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1
00:03:52.017  [2024-12-09 10:18:44.147693] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068de18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147698] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147703] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068de18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147709] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147714] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068de18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147725] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2706:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000
00:03:52.017  [2024-12-09 10:18:44.147731] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147736] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068de18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147743] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2423:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x82068d6a8
00:03:52.017  [2024-12-09 10:18:44.147749] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147754] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147760] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2482:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x82068df18
00:03:52.017  [2024-12-09 10:18:44.147765] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  passed
00:03:52.017    Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-12-09 10:18:44.147770] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147776] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2433:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated
00:03:52.017  [2024-12-09 10:18:44.147781] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147786] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147791] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2472:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05
00:03:52.017  [2024-12-09 10:18:44.147797] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147802] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147808] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147813] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147818] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147823] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147829] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147834] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147839] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147844] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147850] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147855] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  [2024-12-09 10:18:44.147861] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1238:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0
00:03:52.017  [2024-12-09 10:18:44.147866] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1791:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82068df18 is same with the state(6) to be set
00:03:52.017  passed
00:03:52.017    Test: test_nvmf_tcp_tls_generate_psk_id ...passed[2024-12-09 10:18:44.153805] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 584:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small!
00:03:52.017  [2024-12-09 10:18:44.153824] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 595:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested!
00:03:52.017  
00:03:52.017    Test: test_nvmf_tcp_tls_generate_retained_psk ...passed
00:03:52.017    Test: test_nvmf_tcp_tls_generate_tls_psk ...passed
00:03:52.017  
00:03:52.017  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.017                suites      1      1    n/a      0        0
00:03:52.017                 tests     17     17     17      0        0
00:03:52.017               asserts    215    215    215      0      n/a
00:03:52.017  
00:03:52.017  Elapsed time =    0.016 seconds
00:03:52.017  [2024-12-09 10:18:44.153920] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 651:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested!
00:03:52.017  [2024-12-09 10:18:44.153927] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 656:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key!
00:03:52.017  [2024-12-09 10:18:44.153972] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 725:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested!
00:03:52.017  [2024-12-09 10:18:44.153978] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 749:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key!
00:03:52.017   10:18:44 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut
00:03:52.017  
00:03:52.017  
00:03:52.017       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.017       http://cunit.sourceforge.net/
00:03:52.017  
00:03:52.017  
00:03:52.017  Suite: nvmf
00:03:52.017    Test: test_nvmf_tgt_create_poll_group ...passed
00:03:52.017  
00:03:52.017  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.017                suites      1      1    n/a      0        0
00:03:52.017                 tests      1      1      1      0        0
00:03:52.017               asserts     16     16     16      0      n/a
00:03:52.017  
00:03:52.017  Elapsed time =    0.000 seconds
00:03:52.017  
00:03:52.017  real	0m0.060s
00:03:52.017  user	0m0.029s
00:03:52.017  sys	0m0.031s
00:03:52.017   10:18:44 unittest.unittest_nvmf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.017  ************************************
00:03:52.017  END TEST unittest_nvmf
00:03:52.017  ************************************
00:03:52.017   10:18:44 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x
00:03:52.280   10:18:44 unittest -- unit/unittest.sh@245 -- # [[ n == y ]]
00:03:52.280   10:18:44 unittest -- unit/unittest.sh@250 -- # [[ n == y ]]
00:03:52.280   10:18:44 unittest -- unit/unittest.sh@254 -- # run_test unittest_scsi unittest_scsi
00:03:52.280   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.280   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.280   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.280  ************************************
00:03:52.280  START TEST unittest_scsi
00:03:52.280  ************************************
00:03:52.280   10:18:44 unittest.unittest_scsi -- common/autotest_common.sh@1129 -- # unittest_scsi
00:03:52.280   10:18:44 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut
00:03:52.280  
00:03:52.280  
00:03:52.280       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.280       http://cunit.sourceforge.net/
00:03:52.280  
00:03:52.280  
00:03:52.280  Suite: dev_suite
00:03:52.280    Test: dev_destruct_null_dev ...passed
00:03:52.280    Test: dev_destruct_zero_luns ...passed
00:03:52.280    Test: dev_destruct_null_lun ...passed
00:03:52.280    Test: dev_destruct_success ...passed
00:03:52.280    Test: dev_construct_num_luns_zero ...passed
00:03:52.280    Test: dev_construct_no_lun_zero ...[2024-12-09 10:18:44.219984] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified
00:03:52.280  passed
00:03:52.280    Test: dev_construct_null_lun ...passed
00:03:52.280    Test: dev_construct_name_too_long ...passed
00:03:52.280    Test: dev_construct_success ...passed
00:03:52.280    Test: dev_construct_success_lun_zero_not_first ...passed
00:03:52.280    Test: dev_queue_mgmt_task_success ...passed
00:03:52.280    Test: dev_queue_task_success ...passed
00:03:52.280    Test: dev_stop_success ...passed
00:03:52.280    Test: dev_add_port_max_ports ...passed
00:03:52.280    Test: dev_add_port_construct_failure1 ...passed
00:03:52.280    Test: dev_add_port_construct_failure2 ...passed
00:03:52.280    Test: dev_add_port_success1 ...passed
00:03:52.280    Test: dev_add_port_success2 ...passed
00:03:52.280    Test: dev_add_port_success3 ...passed
00:03:52.280    Test: dev_find_port_by_id_num_ports_zero ...passed
00:03:52.280    Test: dev_find_port_by_id_id_not_found_failure ...passed
00:03:52.280    Test: dev_find_port_by_id_success ...passed
00:03:52.280    Test: dev_add_lun_bdev_not_found ...passed
00:03:52.280    Test: dev_add_lun_no_free_lun_id ...[2024-12-09 10:18:44.220177] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified
00:03:52.280  [2024-12-09 10:18:44.220195] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0
00:03:52.280  [2024-12-09 10:18:44.220209] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255
00:03:52.280  [2024-12-09 10:18:44.220246] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports
00:03:52.280  [2024-12-09 10:18:44.220259] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c:  49:scsi_port_construct: *ERROR*: port name too long
00:03:52.280  [2024-12-09 10:18:44.220272] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1)
00:03:52.280  passed
00:03:52.280    Test: dev_add_lun_success1 ...passed
00:03:52.280    Test: dev_add_lun_success2 ...passed
00:03:52.280    Test: dev_check_pending_tasks ...passed
00:03:52.280    Test: dev_iterate_luns ...passed
00:03:52.280    Test: dev_find_free_lun ...passed
00:03:52.280  
00:03:52.280  [2024-12-09 10:18:44.220453] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found
00:03:52.280  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.280                suites      1      1    n/a      0        0
00:03:52.280                 tests     29     29     29      0        0
00:03:52.280               asserts     97     97     97      0      n/a
00:03:52.280  
00:03:52.280  Elapsed time =    0.000 seconds
00:03:52.280   10:18:44 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut
00:03:52.280  
00:03:52.280  
00:03:52.280       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.280       http://cunit.sourceforge.net/
00:03:52.280  
00:03:52.280  
00:03:52.280  Suite: lun_suite
00:03:52.280    Test: lun_task_mgmt_execute_abort_task_not_supported ...passed
00:03:52.280    Test: lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-12-09 10:18:44.228192] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported
00:03:52.280  passed
00:03:52.280    Test: lun_task_mgmt_execute_lun_reset ...passed
00:03:52.280    Test: lun_task_mgmt_execute_target_reset ...passed
00:03:52.280    Test: lun_task_mgmt_execute_invalid_case ...[2024-12-09 10:18:44.228370] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported
00:03:52.280  passed
00:03:52.280    Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed
00:03:52.280    Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed
00:03:52.280    Test: lun_append_task_null_lun_not_supported ...passed
00:03:52.280    Test: lun_execute_scsi_task_pending ...passed
00:03:52.280    Test: lun_execute_scsi_task_complete ...passed
00:03:52.280    Test: lun_execute_scsi_task_resize ...passed
00:03:52.280    Test: lun_destruct_success ...passed
00:03:52.280    Test: lun_construct_null_ctx ...passed
00:03:52.280    Test: lun_construct_success ...passed
00:03:52.280    Test: lun_reset_task_wait_scsi_task_complete ...passed
00:03:52.280    Test: lun_reset_task_suspend_scsi_task ...passed
00:03:52.280    Test: lun_check_pending_tasks_only_for_specific_initiator ...passed
00:03:52.280    Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed
00:03:52.280  
00:03:52.280  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.280                suites      1      1    n/a      0        0
00:03:52.280                 tests     18     18     18      0        0
00:03:52.280               asserts    153    153    153      0      n/a
00:03:52.280  
00:03:52.280  Elapsed time =    0.000 seconds
00:03:52.280  [2024-12-09 10:18:44.228394] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported
00:03:52.280  [2024-12-09 10:18:44.228428] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL
00:03:52.280   10:18:44 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut
00:03:52.280  
00:03:52.280  
00:03:52.280       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.280       http://cunit.sourceforge.net/
00:03:52.280  
00:03:52.280  
00:03:52.280  Suite: scsi_suite
00:03:52.280    Test: scsi_init ...passed
00:03:52.280  
00:03:52.280  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.280                suites      1      1    n/a      0        0
00:03:52.280                 tests      1      1      1      0        0
00:03:52.280               asserts      1      1      1      0      n/a
00:03:52.280  
00:03:52.280  Elapsed time =    0.000 seconds
00:03:52.280   10:18:44 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut
00:03:52.280  
00:03:52.280  
00:03:52.280       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.280       http://cunit.sourceforge.net/
00:03:52.280  
00:03:52.280  
00:03:52.280  Suite: translation_suite
00:03:52.280    Test: mode_select_6_test ...passed
00:03:52.280    Test: mode_select_6_test2 ...passed
00:03:52.280    Test: mode_sense_6_test ...passed
00:03:52.280    Test: mode_sense_10_test ...passed
00:03:52.280    Test: inquiry_evpd_test ...passed
00:03:52.280    Test: inquiry_standard_test ...passed
00:03:52.280    Test: inquiry_overflow_test ...passed
00:03:52.280    Test: task_complete_test ...passed
00:03:52.280    Test: lba_range_test ...passed
00:03:52.280    Test: xfer_len_test ...[2024-12-09 10:18:44.238305] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192
00:03:52.280  passed
00:03:52.280    Test: xfer_test ...passed
00:03:52.280    Test: scsi_name_padding_test ...passed
00:03:52.280    Test: get_dif_ctx_test ...passed
00:03:52.280    Test: unmap_split_test ...passed
00:03:52.280  
00:03:52.280  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.280                suites      1      1    n/a      0        0
00:03:52.280                 tests     14     14     14      0        0
00:03:52.280               asserts   1205   1205   1205      0      n/a
00:03:52.280  
00:03:52.280  Elapsed time =    0.000 seconds
00:03:52.280   10:18:44 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut
00:03:52.280  
00:03:52.280  
00:03:52.280       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.280       http://cunit.sourceforge.net/
00:03:52.280  
00:03:52.280  
00:03:52.280  Suite: reservation_suite
00:03:52.280    Test: test_reservation_register ...[2024-12-09 10:18:44.243021] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.280  passed
00:03:52.280    Test: test_reservation_reserve ...passed
00:03:52.281    Test: test_all_registrant_reservation_reserve ...passed
00:03:52.281    Test: test_all_registrant_reservation_access ...[2024-12-09 10:18:44.243200] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.281  [2024-12-09 10:18:44.243215] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1
00:03:52.281  [2024-12-09 10:18:44.243226] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match
00:03:52.281  [2024-12-09 10:18:44.243241] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.281  passed
00:03:52.281    Test: test_reservation_preempt_non_all_regs ...passed
00:03:52.281    Test: test_reservation_preempt_all_regs ...passed
00:03:52.281    Test: test_reservation_cmds_conflict ...passed
00:03:52.281    Test: test_scsi2_reserve_release ...passed
00:03:52.281    Test: test_pr_with_scsi2_reserve_release ...passed
00:03:52.281  
00:03:52.281  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.281                suites      1      1    n/a      0        0
00:03:52.281                 tests      9      9      9      0        0
00:03:52.281               asserts    344    344    344      0      n/a
00:03:52.281  
00:03:52.281  Elapsed time =    0.000 seconds
00:03:52.281  [2024-12-09 10:18:44.243257] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.281  [2024-12-09 10:18:44.243270] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type  reject command 0x8
00:03:52.281  [2024-12-09 10:18:44.243279] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type  reject command 0xaa
00:03:52.281  [2024-12-09 10:18:44.243292] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.281  [2024-12-09 10:18:44.243302] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey
00:03:52.281  [2024-12-09 10:18:44.243318] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.281  [2024-12-09 10:18:44.243333] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.281  [2024-12-09 10:18:44.243344] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type  reject command 0x2a
00:03:52.281  [2024-12-09 10:18:44.243354] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:03:52.281  [2024-12-09 10:18:44.243362] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:03:52.281  [2024-12-09 10:18:44.243370] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28
00:03:52.281  [2024-12-09 10:18:44.243378] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a
00:03:52.281  [2024-12-09 10:18:44.243397] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa
00:03:52.281  
00:03:52.281  real	0m0.028s
00:03:52.281  user	0m0.006s
00:03:52.281  sys	0m0.024s
00:03:52.281   10:18:44 unittest.unittest_scsi -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.281  ************************************
00:03:52.281  END TEST unittest_scsi
00:03:52.281  ************************************
00:03:52.281   10:18:44 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x
00:03:52.281    10:18:44 unittest -- unit/unittest.sh@255 -- # uname -s
00:03:52.281   10:18:44 unittest -- unit/unittest.sh@255 -- # '[' FreeBSD = Linux ']'
00:03:52.281   10:18:44 unittest -- unit/unittest.sh@260 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:03:52.281   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.281   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.281   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.281  ************************************
00:03:52.281  START TEST unittest_thread
00:03:52.281  ************************************
00:03:52.281   10:18:44 unittest.unittest_thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut
00:03:52.281  
00:03:52.281  
00:03:52.281       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.281       http://cunit.sourceforge.net/
00:03:52.281  
00:03:52.281  
00:03:52.281  Suite: io_channel
00:03:52.281    Test: thread_alloc ...passed
00:03:52.281    Test: thread_send_msg ...passed
00:03:52.281    Test: thread_poller ...passed
00:03:52.281    Test: poller_pause ...passed
00:03:52.281    Test: thread_for_each ...passed
00:03:52.281    Test: for_each_channel_remove ...passed
00:03:52.281    Test: for_each_channel_unreg ...passed
00:03:52.281    Test: thread_name ...[2024-12-09 10:18:44.301465] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2222:spdk_io_device_register: *ERROR*: io_device 0x8203ba3a4 already registered (old:0x3afc31262000 new:0x3afc31262180)
00:03:52.281  passed
00:03:52.281    Test: channel ...passed
00:03:52.281    Test: channel_destroy_races ...passed
00:03:52.281    Test: thread_exit_test ...[2024-12-09 10:18:44.301896] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2355:spdk_get_io_channel: *ERROR*: could not find io_device 0x23a678
00:03:52.281  [2024-12-09 10:18:44.302272] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 664:thread_exit: *ERROR*: thread 0x3afc3122cc00 got timeout, and move it to the exited state forcefully
00:03:52.281  passed
00:03:52.281    Test: thread_update_stats_test ...passed
00:03:52.281    Test: nested_channel ...passed
00:03:52.281    Test: device_unregister_and_thread_exit_race ...passed
00:03:52.281    Test: cache_closest_timed_poller ...passed
00:03:52.281    Test: multi_timed_pollers_have_same_expiration ...passed
00:03:52.281    Test: io_device_lookup ...passed
00:03:52.281    Test: spdk_spin ...[2024-12-09 10:18:44.303127] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3139:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:03:52.281  [2024-12-09 10:18:44.303139] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x8203ba3a0
00:03:52.281  [2024-12-09 10:18:44.303148] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3177:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0))
00:03:52.281  [2024-12-09 10:18:44.303264] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3140:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:03:52.281  [2024-12-09 10:18:44.303273] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x8203ba3a0
00:03:52.281  [2024-12-09 10:18:44.303280] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3160:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:03:52.281  [2024-12-09 10:18:44.303288] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x8203ba3a0
00:03:52.281  passed
00:03:52.281    Test: for_each_channel_and_thread_exit_race ...[2024-12-09 10:18:44.303296] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3160:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread)
00:03:52.281  [2024-12-09 10:18:44.303303] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x8203ba3a0
00:03:52.281  [2024-12-09 10:18:44.303312] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3121:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0))
00:03:52.281  [2024-12-09 10:18:44.303319] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x8203ba3a0
00:03:52.281  passed
00:03:52.281    Test: for_each_thread_and_thread_exit_race ...passed
00:03:52.281    Test: poller_get_name ...passed
00:03:52.281    Test: poller_get_id ...passed
00:03:52.281    Test: poller_get_state_str ...passed
00:03:52.281    Test: poller_get_period_ticks ...passed
00:03:52.281    Test: poller_get_stats ...passed
00:03:52.281  
00:03:52.281  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.281                suites      1      1    n/a      0        0
00:03:52.281                 tests     25     25     25      0        0
00:03:52.281               asserts    429    429    429      0      n/a
00:03:52.281  
00:03:52.281  Elapsed time =    0.000 seconds
00:03:52.281  
00:03:52.281  real	0m0.010s
00:03:52.281  user	0m0.009s
00:03:52.281  sys	0m0.003s
00:03:52.281   10:18:44 unittest.unittest_thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.281  ************************************
00:03:52.281  END TEST unittest_thread
00:03:52.281  ************************************
00:03:52.281   10:18:44 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x
00:03:52.281   10:18:44 unittest -- unit/unittest.sh@261 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:03:52.281   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.281   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.281   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.281  ************************************
00:03:52.281  START TEST unittest_iobuf
00:03:52.281  ************************************
00:03:52.281   10:18:44 unittest.unittest_iobuf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut
00:03:52.281  
00:03:52.281  
00:03:52.281       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.281       http://cunit.sourceforge.net/
00:03:52.281  
00:03:52.281  
00:03:52.281  Suite: io_channel
00:03:52.281    Test: iobuf ...passed
00:03:52.281    Test: iobuf_cache ...[2024-12-09 10:18:44.352385] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 417:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:03:52.281  [2024-12-09 10:18:44.352538] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 419:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:03:52.281  [2024-12-09 10:18:44.352562] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 429:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4)
00:03:52.281  [2024-12-09 10:18:44.352572] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 431:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:03:52.281  [2024-12-09 10:18:44.352584] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 417:iobuf_channel_node_populate: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4)
00:03:52.281  [2024-12-09 10:18:44.352592] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 419:iobuf_channel_node_populate: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value.
00:03:52.281  passed
00:03:52.281    Test: iobuf_priority ...passed
00:03:52.281  
00:03:52.281  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.281                suites      1      1    n/a      0        0
00:03:52.281                 tests      3      3      3      0        0
00:03:52.281               asserts    127    127    127      0      n/a
00:03:52.281  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282  
00:03:52.282  real	0m0.006s
00:03:52.282  user	0m0.004s
00:03:52.282  sys	0m0.004s
00:03:52.282   10:18:44 unittest.unittest_iobuf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.282   10:18:44 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x
00:03:52.282  ************************************
00:03:52.282  END TEST unittest_iobuf
00:03:52.282  ************************************
00:03:52.282   10:18:44 unittest -- unit/unittest.sh@262 -- # run_test unittest_util unittest_util
00:03:52.282   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.282   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.282   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.282  ************************************
00:03:52.282  START TEST unittest_util
00:03:52.282  ************************************
00:03:52.282   10:18:44 unittest.unittest_util -- common/autotest_common.sh@1129 -- # unittest_util
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: base64
00:03:52.282    Test: test_base64_get_encoded_strlen ...passed
00:03:52.282    Test: test_base64_get_decoded_len ...passed
00:03:52.282    Test: test_base64_encode ...passed
00:03:52.282    Test: test_base64_decode ...passed
00:03:52.282    Test: test_base64_urlsafe_encode ...passed
00:03:52.282    Test: test_base64_urlsafe_decode ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      6      6      6      0        0
00:03:52.282               asserts    112    112    112      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: bit_array
00:03:52.282    Test: test_1bit ...passed
00:03:52.282    Test: test_64bit ...passed
00:03:52.282    Test: test_find ...passed
00:03:52.282    Test: test_resize ...passed
00:03:52.282    Test: test_errors ...passed
00:03:52.282    Test: test_count ...passed
00:03:52.282    Test: test_mask_store_load ...passed
00:03:52.282    Test: test_mask_clear ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      8      8      8      0        0
00:03:52.282               asserts   5075   5075   5075      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: cpuset
00:03:52.282    Test: test_cpuset ...passed
00:03:52.282    Test: test_cpuset_parse ...[2024-12-09 10:18:44.411399] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '['
00:03:52.282  [2024-12-09 10:18:44.411670] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']'
00:03:52.282  [2024-12-09 10:18:44.411693] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-'
00:03:52.282  [2024-12-09 10:18:44.411711] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10)
00:03:52.282  [2024-12-09 10:18:44.411726] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ','
00:03:52.282  [2024-12-09 10:18:44.411741] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ','
00:03:52.282  [2024-12-09 10:18:44.411757] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]'
00:03:52.282  [2024-12-09 10:18:44.411773] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed
00:03:52.282  passed
00:03:52.282    Test: test_cpuset_fmt ...passed
00:03:52.282    Test: test_cpuset_foreach ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      4      4      4      0        0
00:03:52.282               asserts     90     90     90      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: crc16
00:03:52.282    Test: test_crc16_t10dif ...passed
00:03:52.282    Test: test_crc16_t10dif_seed ...passed
00:03:52.282    Test: test_crc16_t10dif_copy ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      3      3      3      0        0
00:03:52.282               asserts      5      5      5      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: crc32_ieee
00:03:52.282    Test: test_crc32_ieee ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      1      1      1      0        0
00:03:52.282               asserts      1      1      1      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: crc32c
00:03:52.282    Test: test_crc32c ...passed
00:03:52.282    Test: test_crc32c_nvme ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      2      2      2      0        0
00:03:52.282               asserts     16     16     16      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: crc64
00:03:52.282    Test: test_crc64_nvme ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      1      1      1      0        0
00:03:52.282               asserts      4      4      4      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.282   10:18:44 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut
00:03:52.282  
00:03:52.282  
00:03:52.282       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.282       http://cunit.sourceforge.net/
00:03:52.282  
00:03:52.282  
00:03:52.282  Suite: string
00:03:52.282    Test: test_parse_ip_addr ...passed
00:03:52.282    Test: test_str_chomp ...passed
00:03:52.282    Test: test_parse_capacity ...passed
00:03:52.282    Test: test_sprintf_append_realloc ...passed
00:03:52.282    Test: test_strtol ...passed
00:03:52.282    Test: test_strtoll ...passed
00:03:52.282    Test: test_strarray ...passed
00:03:52.282    Test: test_strcpy_replace ...passed
00:03:52.282  
00:03:52.282  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.282                suites      1      1    n/a      0        0
00:03:52.282                 tests      8      8      8      0        0
00:03:52.282               asserts    161    161    161      0      n/a
00:03:52.282  
00:03:52.282  Elapsed time =    0.000 seconds
00:03:52.545   10:18:44 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut
00:03:52.545  
00:03:52.545  
00:03:52.545       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.545       http://cunit.sourceforge.net/
00:03:52.545  
00:03:52.545  
00:03:52.545  Suite: dif
00:03:52.545    Test: dif_generate_and_verify_test ...[2024-12-09 10:18:44.441504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:03:52.545  [2024-12-09 10:18:44.441719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:03:52.545  [2024-12-09 10:18:44.441778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16
00:03:52.545  [2024-12-09 10:18:44.441847] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:03:52.545  [2024-12-09 10:18:44.441901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:03:52.545  passed
00:03:52.545    Test: dif_disable_check_test ...[2024-12-09 10:18:44.441956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=23, Actual=22
00:03:52.545  [2024-12-09 10:18:44.442149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:03:52.545  [2024-12-09 10:18:44.442211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:03:52.545  [2024-12-09 10:18:44.442265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22,  Expected=22, Actual=ffff
00:03:52.545  passed
00:03:52.545    Test: dif_generate_and_verify_different_pi_formats_test ...[2024-12-09 10:18:44.442466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a80000, Actual=b9848de
00:03:52.545  [2024-12-09 10:18:44.442523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b98, Actual=b0a8
00:03:52.545  [2024-12-09 10:18:44.442578] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b0a8000000000000, Actual=81039fcf5685d8d4
00:03:52.545  [2024-12-09 10:18:44.442632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12,  Expected=b9848de00000000, Actual=81039fcf5685d8d4
00:03:52.545  [2024-12-09 10:18:44.442699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:03:52.545  [2024-12-09 10:18:44.442764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:03:52.545  [2024-12-09 10:18:44.442818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:03:52.545  [2024-12-09 10:18:44.442872] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=17, Actual=0
00:03:52.545  [2024-12-09 10:18:44.442926] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:03:52.545  [2024-12-09 10:18:44.442980] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:03:52.545  [2024-12-09 10:18:44.443034] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0
00:03:52.545  passed
00:03:52.545    Test: dif_apptag_mask_test ...[2024-12-09 10:18:44.443091] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:03:52.545  [2024-12-09 10:18:44.443145] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12,  Expected=1256, Actual=1234
00:03:52.545  passed
00:03:52.545    Test: dif_sec_8_md_8_error_test ...passed
00:03:52.545    Test: dif_sec_512_md_0_error_test ...passed
00:03:52.545    Test: dif_sec_512_md_16_error_test ...passed
00:03:52.545    Test: dif_sec_4096_md_0_8_error_test ...[2024-12-09 10:18:44.443176] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 615:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:03:52.545  [2024-12-09 10:18:44.443187] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.545  [2024-12-09 10:18:44.443198] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:03:52.545  [2024-12-09 10:18:44.443206] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:03:52.545  [2024-12-09 10:18:44.443215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.545  [2024-12-09 10:18:44.443223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.545  [2024-12-09 10:18:44.443231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.545  [2024-12-09 10:18:44.443239] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.545  passed
00:03:52.545    Test: dif_sec_4100_md_128_error_test ...passed
00:03:52.545    Test: dif_guard_seed_test ...passed
00:03:52.545    Test: dif_guard_value_test ...[2024-12-09 10:18:44.443249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:03:52.545  [2024-12-09 10:18:44.443258] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:03:52.545  passed
00:03:52.545    Test: dif_disable_sec_512_md_8_single_iov_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed
00:03:52.545    Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:03:52.545    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-09 10:18:44.449222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=7d4c, Actual=fd4c
00:03:52.545  [2024-12-09 10:18:44.449452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=7e21, Actual=fe21
00:03:52.545  [2024-12-09 10:18:44.449681] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.545  [2024-12-09 10:18:44.449918] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.545  [2024-12-09 10:18:44.450147] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.545  [2024-12-09 10:18:44.450375] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.545  [2024-12-09 10:18:44.450596] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=fd4c, Actual=96
00:03:52.545  [2024-12-09 10:18:44.450758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=fe21, Actual=5c8e
00:03:52.545  [2024-12-09 10:18:44.450910] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.545  [2024-12-09 10:18:44.451137] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=b8574660, Actual=38574660
00:03:52.545  [2024-12-09 10:18:44.451357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.545  [2024-12-09 10:18:44.451583] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.545  [2024-12-09 10:18:44.451810] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.545  [2024-12-09 10:18:44.452028] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.545  [2024-12-09 10:18:44.452255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.545  [2024-12-09 10:18:44.452407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=38574660, Actual=e82a86d7
00:03:52.545  [2024-12-09 10:18:44.452560] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.545  [2024-12-09 10:18:44.452794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=88010a2dc837a266, Actual=88010a2d4837a266
00:03:52.545  [2024-12-09 10:18:44.453021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.545  [2024-12-09 10:18:44.453246] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.545  [2024-12-09 10:18:44.453470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a
00:03:52.545  [2024-12-09 10:18:44.453683] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a
00:03:52.545  [2024-12-09 10:18:44.453909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.545  [2024-12-09 10:18:44.454062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=88010a2d4837a266, Actual=5fb8a4ab5f5c3b03
00:03:52.545  passed
00:03:52.545    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-12-09 10:18:44.454110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:03:52.546  [2024-12-09 10:18:44.454139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:03:52.546  [2024-12-09 10:18:44.454166] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.454194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.454222] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.454250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.454278] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=96
00:03:52.546  [2024-12-09 10:18:44.454304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=5c8e
00:03:52.546  [2024-12-09 10:18:44.454330] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.546  [2024-12-09 10:18:44.454358] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=b8574660, Actual=38574660
00:03:52.546  [2024-12-09 10:18:44.454385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.454413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.454441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.454468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.454496] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.546  [2024-12-09 10:18:44.454521] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=e82a86d7
00:03:52.546  [2024-12-09 10:18:44.454546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.546  [2024-12-09 10:18:44.454574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2dc837a266, Actual=88010a2d4837a266
00:03:52.546  [2024-12-09 10:18:44.454601] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.454629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  passed
00:03:52.546    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-12-09 10:18:44.454656] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.546  [2024-12-09 10:18:44.454684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.546  [2024-12-09 10:18:44.454711] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.546  [2024-12-09 10:18:44.454743] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=5fb8a4ab5f5c3b03
00:03:52.546  [2024-12-09 10:18:44.454770] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:03:52.546  [2024-12-09 10:18:44.454798] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:03:52.546  [2024-12-09 10:18:44.454826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.454853] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.454881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.454909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.454937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=96
00:03:52.546  [2024-12-09 10:18:44.454962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=5c8e
00:03:52.546  [2024-12-09 10:18:44.454988] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.546  [2024-12-09 10:18:44.455015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=b8574660, Actual=38574660
00:03:52.546  [2024-12-09 10:18:44.455042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455070] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.455126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.455153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.546  [2024-12-09 10:18:44.455179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=e82a86d7
00:03:52.546  [2024-12-09 10:18:44.455204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.546  [2024-12-09 10:18:44.455232] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2dc837a266, Actual=88010a2d4837a266
00:03:52.546  [2024-12-09 10:18:44.455259] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.546  [2024-12-09 10:18:44.455342] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.546  [2024-12-09 10:18:44.455370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.546  [2024-12-09 10:18:44.455395] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=5fb8a4ab5f5c3b03
00:03:52.546  passed
00:03:52.546    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-12-09 10:18:44.455422] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:03:52.546  [2024-12-09 10:18:44.455450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:03:52.546  [2024-12-09 10:18:44.455478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455505] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.455561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.455589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=96
00:03:52.546  [2024-12-09 10:18:44.455614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=5c8e
00:03:52.546  [2024-12-09 10:18:44.455640] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.546  [2024-12-09 10:18:44.455667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=b8574660, Actual=38574660
00:03:52.546  [2024-12-09 10:18:44.455695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455722] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455750] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.455777] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.455805] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.546  [2024-12-09 10:18:44.455830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=e82a86d7
00:03:52.546  [2024-12-09 10:18:44.455856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.546  [2024-12-09 10:18:44.455883] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2dc837a266, Actual=88010a2d4837a266
00:03:52.546  [2024-12-09 10:18:44.455911] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455938] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.455966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.546  [2024-12-09 10:18:44.455993] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.546  [2024-12-09 10:18:44.456021] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.546  passed
00:03:52.546    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-12-09 10:18:44.456046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=5fb8a4ab5f5c3b03
00:03:52.546  [2024-12-09 10:18:44.456073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:03:52.546  [2024-12-09 10:18:44.456101] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:03:52.546  [2024-12-09 10:18:44.456126] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.456153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.546  [2024-12-09 10:18:44.456181] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  [2024-12-09 10:18:44.456209] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.546  passed
00:03:52.547    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-12-09 10:18:44.456236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=96
00:03:52.547  [2024-12-09 10:18:44.456261] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=5c8e
00:03:52.547  [2024-12-09 10:18:44.456288] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.547  [2024-12-09 10:18:44.456313] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=b8574660, Actual=38574660
00:03:52.547  [2024-12-09 10:18:44.456341] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.456370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.456404] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.547  [2024-12-09 10:18:44.456437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.547  [2024-12-09 10:18:44.456470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.547  [2024-12-09 10:18:44.456498] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=e82a86d7
00:03:52.547  [2024-12-09 10:18:44.456524] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.547  [2024-12-09 10:18:44.456552] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2dc837a266, Actual=88010a2d4837a266
00:03:52.547  [2024-12-09 10:18:44.456579] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.456604] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.456632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.547  [2024-12-09 10:18:44.456659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.547  passed
00:03:52.547    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-12-09 10:18:44.456687] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.547  [2024-12-09 10:18:44.456712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=5fb8a4ab5f5c3b03
00:03:52.547  [2024-12-09 10:18:44.456739] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:03:52.547  [2024-12-09 10:18:44.456767] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7e21, Actual=fe21
00:03:52.547  [2024-12-09 10:18:44.456795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.456822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.456850] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.547  [2024-12-09 10:18:44.456877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.547  [2024-12-09 10:18:44.456905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=96
00:03:52.547  passed
00:03:52.547    Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-12-09 10:18:44.456930] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fe21, Actual=5c8e
00:03:52.547  [2024-12-09 10:18:44.456957] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.547  [2024-12-09 10:18:44.456985] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=b8574660, Actual=38574660
00:03:52.547  [2024-12-09 10:18:44.457012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.457040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.457068] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.547  [2024-12-09 10:18:44.457096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.547  [2024-12-09 10:18:44.457123] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.547  [2024-12-09 10:18:44.457149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38574660, Actual=e82a86d7
00:03:52.547  [2024-12-09 10:18:44.457174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.547  [2024-12-09 10:18:44.457202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2dc837a266, Actual=88010a2d4837a266
00:03:52.547  [2024-12-09 10:18:44.457227] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.457255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.457282] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.547  [2024-12-09 10:18:44.457310] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.547  [2024-12-09 10:18:44.457337] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.547  passed
00:03:52.547    Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...[2024-12-09 10:18:44.457363] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=88010a2d4837a266, Actual=5fb8a4ab5f5c3b03
00:03:52.547  passed
00:03:52.547    Test: dif_copy_sec_512_md_8_dif_disable_single_iov ...passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:03:52.547    Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_bounce_iovs_test ...passed
00:03:52.547    Test: nvme_pract_sec_4096_md_128_prchk_0_1_2_4_multi_bounce_iovs_test ...passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:03:52.547    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:03:52.547    Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:03:52.547    Test: dif_copy_sec_512_md_8_prchk_7_multi_bounce_iovs_complex_splits ...passed
00:03:52.547    Test: dif_copy_sec_512_md_8_dif_disable_multi_bounce_iovs_complex_splits ...passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:03:52.547    Test: nvme_pract_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-09 10:18:44.468203] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=7d4c, Actual=fd4c
00:03:52.547  [2024-12-09 10:18:44.468336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=d218, Actual=5218
00:03:52.547  [2024-12-09 10:18:44.468462] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.468587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.468712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.547  [2024-12-09 10:18:44.468837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.547  [2024-12-09 10:18:44.468962] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=fd4c, Actual=96
00:03:52.547  [2024-12-09 10:18:44.469087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=d90d, Actual=7ba2
00:03:52.547  [2024-12-09 10:18:44.469211] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.547  [2024-12-09 10:18:44.469335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a884bb0b, Actual=2884bb0b
00:03:52.547  [2024-12-09 10:18:44.469460] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.469584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.469708] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.547  [2024-12-09 10:18:44.469832] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.547  [2024-12-09 10:18:44.469956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.547  [2024-12-09 10:18:44.470080] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=890612e, Actual=d8eda199
00:03:52.547  [2024-12-09 10:18:44.470204] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.547  [2024-12-09 10:18:44.470328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=96acd229a8fb7a90, Actual=96acd22928fb7a90
00:03:52.547  [2024-12-09 10:18:44.470452] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.470576] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.547  [2024-12-09 10:18:44.470701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a
00:03:52.547  [2024-12-09 10:18:44.470835] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a
00:03:52.547  [2024-12-09 10:18:44.470959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.547  [2024-12-09 10:18:44.471084] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=d8fbbefb69e63b38, Actual=f42107d7e8da25d
00:03:52.547  passed
00:03:52.547    Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-12-09 10:18:44.471119] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:03:52.547  [2024-12-09 10:18:44.471149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2603, Actual=a603
00:03:52.547  [2024-12-09 10:18:44.471178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.471207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.471237] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.471266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.471295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=96
00:03:52.548  [2024-12-09 10:18:44.471324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=8fb9
00:03:52.548  [2024-12-09 10:18:44.471354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.548  [2024-12-09 10:18:44.471384] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=d7de46aa, Actual=57de46aa
00:03:52.548  [2024-12-09 10:18:44.471413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.471442] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.471471] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.471501] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.471530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.548  [2024-12-09 10:18:44.471559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=77ca9c8f, Actual=a7b75c38
00:03:52.548  [2024-12-09 10:18:44.471589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.548  [2024-12-09 10:18:44.471618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=763146c997c7710a, Actual=763146c917c7710a
00:03:52.548  [2024-12-09 10:18:44.471647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.471677] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.471706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.548  [2024-12-09 10:18:44.471735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.548  [2024-12-09 10:18:44.471764] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.548  [2024-12-09 10:18:44.471794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=efdf849d41b1a9c7
00:03:52.548  passed
00:03:52.548    Test: dix_sec_0_md_8_error ...passed
00:03:52.548    Test: dix_sec_512_md_0_error ...passed
00:03:52.548    Test: dix_sec_512_md_16_error ...passed
00:03:52.548    Test: dix_sec_4096_md_0_8_error ...passed
00:03:52.548    Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-12-09 10:18:44.471799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 615:spdk_dif_ctx_init: *ERROR*: Zero data block size is not allowed
00:03:52.548  [2024-12-09 10:18:44.471804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.548  [2024-12-09 10:18:44.471809] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:03:52.548  [2024-12-09 10:18:44.471813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 626:spdk_dif_ctx_init: *ERROR*: Data block size should be a multiple of 4kB
00:03:52.548  [2024-12-09 10:18:44.471818] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.548  [2024-12-09 10:18:44.471822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.548  [2024-12-09 10:18:44.471826] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.548  [2024-12-09 10:18:44.471830] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 600:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size.
00:03:52.548  passed
00:03:52.548    Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed
00:03:52.548    Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed
00:03:52.548    Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed
00:03:52.548    Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed
00:03:52.548    Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed
00:03:52.548    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed
00:03:52.548    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed
00:03:52.548    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed
00:03:52.548    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-12-09 10:18:44.475768] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=7d4c, Actual=fd4c
00:03:52.548  [2024-12-09 10:18:44.475893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=d218, Actual=5218
00:03:52.548  [2024-12-09 10:18:44.476018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.476142] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.476265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.548  [2024-12-09 10:18:44.476389] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.548  [2024-12-09 10:18:44.476512] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=fd4c, Actual=96
00:03:52.548  [2024-12-09 10:18:44.476635] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=d90d, Actual=7ba2
00:03:52.548  [2024-12-09 10:18:44.476758] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.548  [2024-12-09 10:18:44.476881] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a884bb0b, Actual=2884bb0b
00:03:52.548  [2024-12-09 10:18:44.477004] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.477127] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.477249] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.548  [2024-12-09 10:18:44.477373] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=8000005a
00:03:52.548  [2024-12-09 10:18:44.477495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.548  [2024-12-09 10:18:44.477618] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=890612e, Actual=d8eda199
00:03:52.548  [2024-12-09 10:18:44.477741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.548  [2024-12-09 10:18:44.477864] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=96acd229a8fb7a90, Actual=96acd22928fb7a90
00:03:52.548  [2024-12-09 10:18:44.477987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.478110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.478234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a
00:03:52.548  [2024-12-09 10:18:44.478357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=805a
00:03:52.548  [2024-12-09 10:18:44.478480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.548  [2024-12-09 10:18:44.478603] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90,  Expected=d8fbbefb69e63b38, Actual=f42107d7e8da25d
00:03:52.548  passed
00:03:52.548    Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-12-09 10:18:44.478637] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=7d4c, Actual=fd4c
00:03:52.548  [2024-12-09 10:18:44.478667] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2603, Actual=a603
00:03:52.548  [2024-12-09 10:18:44.478695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.478724] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.478759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.478788] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.478816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=fd4c, Actual=96
00:03:52.548  [2024-12-09 10:18:44.478845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=2d16, Actual=8fb9
00:03:52.548  [2024-12-09 10:18:44.478874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=9ab753ed, Actual=1ab753ed
00:03:52.548  [2024-12-09 10:18:44.478903] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=4ab29a89, Actual=cab29a89
00:03:52.548  [2024-12-09 10:18:44.478932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.478960] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.548  [2024-12-09 10:18:44.478989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.479017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=80000058
00:03:52.548  [2024-12-09 10:18:44.479046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=1ab753ed, Actual=a1097ba4
00:03:52.548  [2024-12-09 10:18:44.479075] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=eaa640ac, Actual=3adb801b
00:03:52.548  [2024-12-09 10:18:44.479103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7720ecc20d3, Actual=a576a7728ecc20d3
00:03:52.548  [2024-12-09 10:18:44.479132] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=763146c997c7710a, Actual=763146c917c7710a
00:03:52.548  [2024-12-09 10:18:44.479161] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.549  [2024-12-09 10:18:44.479189] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 939:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88,  Expected=88, Actual=8088
00:03:52.549  [2024-12-09 10:18:44.479218] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.549  [2024-12-09 10:18:44.479247] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 874:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=8058
00:03:52.549  [2024-12-09 10:18:44.479275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=a576a7728ecc20d3, Actual=6e5834f197ae586a
00:03:52.549  passed
00:03:52.549    Test: set_md_interleave_iovs_test ...[2024-12-09 10:18:44.479304] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 924:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88,  Expected=38662a1b56da30a2, Actual=efdf849d41b1a9c7
00:03:52.549  passed
00:03:52.549    Test: set_md_interleave_iovs_split_test ...passed
00:03:52.549    Test: dif_generate_stream_pi_16_test ...passed
00:03:52.549    Test: dif_generate_stream_test ...passed
00:03:52.549    Test: set_md_interleave_iovs_alignment_test ...passed
00:03:52.549    Test: dif_generate_split_test ...[2024-12-09 10:18:44.479863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:2193:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur.
00:03:52.549  passed
00:03:52.549    Test: set_md_interleave_iovs_multi_segments_test ...passed
00:03:52.549    Test: dif_verify_split_test ...passed
00:03:52.549    Test: dif_verify_stream_multi_segments_test ...passed
00:03:52.549    Test: update_crc32c_pi_16_test ...passed
00:03:52.549    Test: update_crc32c_test ...passed
00:03:52.549    Test: dif_update_crc32c_split_test ...passed
00:03:52.549    Test: dif_update_crc32c_stream_multi_segments_test ...passed
00:03:52.549    Test: get_range_with_md_test ...passed
00:03:52.549    Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed
00:03:52.549    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed
00:03:52.549    Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:03:52.549    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed
00:03:52.549    Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed
00:03:52.549    Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed
00:03:52.549    Test: dif_generate_and_verify_unmap_test ...passed
00:03:52.549    Test: dif_pi_format_check_test ...passed
00:03:52.549    Test: dif_type_check_test ...passed
00:03:52.549  
00:03:52.549  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.549                suites      1      1    n/a      0        0
00:03:52.549                 tests     92     92     92      0        0
00:03:52.549               asserts   3872   3872   3872      0      n/a
00:03:52.549  
00:03:52.549  Elapsed time =    0.039 seconds
00:03:52.549   10:18:44 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut
00:03:52.549  
00:03:52.549  
00:03:52.549       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.549       http://cunit.sourceforge.net/
00:03:52.549  
00:03:52.549  
00:03:52.549  Suite: iov
00:03:52.549    Test: test_single_iov ...passed
00:03:52.549    Test: test_simple_iov ...passed
00:03:52.549    Test: test_complex_iov ...passed
00:03:52.549    Test: test_iovs_to_buf ...passed
00:03:52.549    Test: test_buf_to_iovs ...passed
00:03:52.549    Test: test_memset ...passed
00:03:52.549    Test: test_iov_one ...passed
00:03:52.549    Test: test_iov_xfer ...passed
00:03:52.549  
00:03:52.549  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.549                suites      1      1    n/a      0        0
00:03:52.549                 tests      8      8      8      0        0
00:03:52.549               asserts    156    156    156      0      n/a
00:03:52.549  
00:03:52.549  Elapsed time =    0.000 seconds
00:03:52.549   10:18:44 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut
00:03:52.549  
00:03:52.549  
00:03:52.549       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.549       http://cunit.sourceforge.net/
00:03:52.549  
00:03:52.549  
00:03:52.549  Suite: math
00:03:52.549    Test: test_serial_number_arithmetic ...passed
00:03:52.549  Suite: erase
00:03:52.549    Test: test_memset_s ...passed
00:03:52.549  
00:03:52.549  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.549                suites      2      2    n/a      0        0
00:03:52.549                 tests      2      2      2      0        0
00:03:52.549               asserts     18     18     18      0      n/a
00:03:52.549  
00:03:52.549  Elapsed time =    0.000 seconds
00:03:52.549   10:18:44 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut
00:03:52.549  
00:03:52.549  
00:03:52.549       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.549       http://cunit.sourceforge.net/
00:03:52.549  
00:03:52.549  
00:03:52.549  Suite: pipe
00:03:52.549    Test: test_create_destroy ...passed
00:03:52.549    Test: test_write_get_buffer ...passed
00:03:52.549    Test: test_write_advance ...passed
00:03:52.549    Test: test_read_get_buffer ...passed
00:03:52.549    Test: test_read_advance ...passed
00:03:52.549    Test: test_data ...passed
00:03:52.549  
00:03:52.549  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.549                suites      1      1    n/a      0        0
00:03:52.549                 tests      6      6      6      0        0
00:03:52.549               asserts    251    251    251      0      n/a
00:03:52.549  
00:03:52.549  Elapsed time =    0.000 seconds
00:03:52.549    10:18:44 unittest.unittest_util -- unit/unittest.sh@146 -- # uname -s
00:03:52.549   10:18:44 unittest.unittest_util -- unit/unittest.sh@146 -- # '[' FreeBSD = Linux ']'
00:03:52.549  
00:03:52.549  real	0m0.102s
00:03:52.549  user	0m0.076s
00:03:52.549  sys	0m0.029s
00:03:52.549   10:18:44 unittest.unittest_util -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.549  ************************************
00:03:52.549  END TEST unittest_util
00:03:52.549  ************************************
00:03:52.549   10:18:44 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x
00:03:52.549   10:18:44 unittest -- unit/unittest.sh@263 -- # [[ n == y ]]
00:03:52.549   10:18:44 unittest -- unit/unittest.sh@266 -- # [[ n == y ]]
00:03:52.549   10:18:44 unittest -- unit/unittest.sh@269 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:03:52.549   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.549   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.549   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.549  ************************************
00:03:52.549  START TEST unittest_dma
00:03:52.549  ************************************
00:03:52.549   10:18:44 unittest.unittest_dma -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut
00:03:52.549  
00:03:52.549  
00:03:52.549       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.549       http://cunit.sourceforge.net/
00:03:52.549  
00:03:52.549  
00:03:52.549  Suite: dma_suite
00:03:52.549    Test: test_dma ...passed
00:03:52.549  
00:03:52.549  [2024-12-09 10:18:44.535509] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c:  60:spdk_memory_domain_create: *ERROR*: Context size can't be 0
00:03:52.549  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.549                suites      1      1    n/a      0        0
00:03:52.549                 tests      1      1      1      0        0
00:03:52.549               asserts     54     54     54      0      n/a
00:03:52.549  
00:03:52.549  Elapsed time =    0.000 seconds
00:03:52.549  
00:03:52.549  real	0m0.004s
00:03:52.549  user	0m0.000s
00:03:52.549  sys	0m0.003s
00:03:52.549   10:18:44 unittest.unittest_dma -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.549  ************************************
00:03:52.549  END TEST unittest_dma
00:03:52.549  ************************************
00:03:52.549   10:18:44 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x
00:03:52.549   10:18:44 unittest -- unit/unittest.sh@271 -- # run_test unittest_init unittest_init
00:03:52.549   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.549   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.549   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.549  ************************************
00:03:52.549  START TEST unittest_init
00:03:52.549  ************************************
00:03:52.549   10:18:44 unittest.unittest_init -- common/autotest_common.sh@1129 -- # unittest_init
00:03:52.549   10:18:44 unittest.unittest_init -- unit/unittest.sh@156 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut
00:03:52.549  
00:03:52.549  
00:03:52.549       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.549       http://cunit.sourceforge.net/
00:03:52.549  
00:03:52.549  
00:03:52.549  Suite: subsystem_suite
00:03:52.549    Test: subsystem_sort_test_depends_on_single ...passed
00:03:52.549    Test: subsystem_sort_test_depends_on_multiple ...passed
00:03:52.550    Test: subsystem_sort_test_missing_dependency ...[2024-12-09 10:18:44.565364] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing
00:03:52.550  [2024-12-09 10:18:44.565502] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing
00:03:52.550  passed
00:03:52.550  
00:03:52.550  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.550                suites      1      1    n/a      0        0
00:03:52.550                 tests      3      3      3      0        0
00:03:52.550               asserts     20     20     20      0      n/a
00:03:52.550  
00:03:52.550  Elapsed time =    0.000 seconds
00:03:52.550  
00:03:52.550  real	0m0.004s
00:03:52.550  user	0m0.003s
00:03:52.550  sys	0m0.003s
00:03:52.550   10:18:44 unittest.unittest_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.550   10:18:44 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x
00:03:52.550  ************************************
00:03:52.550  END TEST unittest_init
00:03:52.550  ************************************
00:03:52.550   10:18:44 unittest -- unit/unittest.sh@272 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:03:52.550   10:18:44 unittest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:52.550   10:18:44 unittest -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:52.550   10:18:44 unittest -- common/autotest_common.sh@10 -- # set +x
00:03:52.550  ************************************
00:03:52.550  START TEST unittest_keyring
00:03:52.550  ************************************
00:03:52.550   10:18:44 unittest.unittest_keyring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut
00:03:52.550  
00:03:52.550  
00:03:52.550       CUnit - A unit testing framework for C - Version 2.1-3
00:03:52.550       http://cunit.sourceforge.net/
00:03:52.550  
00:03:52.550  
00:03:52.550  Suite: keyring
00:03:52.550    Test: test_keyring_add_remove ...passed
00:03:52.550    Test: test_keyring_get_put ...passed
00:03:52.550  
00:03:52.550  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:03:52.550                suites      1      1    n/a      0        0
00:03:52.550                 tests      2      2      2      0        0
00:03:52.550               asserts     46     46     46      0      n/a
00:03:52.550  
00:03:52.550  Elapsed time =    0.000 seconds
00:03:52.550  [2024-12-09 10:18:44.602315] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists
00:03:52.550  [2024-12-09 10:18:44.602423] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists
00:03:52.550  [2024-12-09 10:18:44.602431] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 168:spdk_keyring_remove_key: *ERROR*: Key 'key0' is not owned by module 'ut2'
00:03:52.550  [2024-12-09 10:18:44.602437] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key 'key0' does not exist
00:03:52.550  [2024-12-09 10:18:44.602443] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 162:spdk_keyring_remove_key: *ERROR*: Key ':key0' does not exist
00:03:52.550  [2024-12-09 10:18:44.602448] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:03:52.550  
00:03:52.550  real	0m0.003s
00:03:52.550  user	0m0.002s
00:03:52.550  sys	0m0.002s
00:03:52.550   10:18:44 unittest.unittest_keyring -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:52.550  ************************************
00:03:52.550  END TEST unittest_keyring
00:03:52.550  ************************************
00:03:52.550   10:18:44 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x
00:03:52.550   10:18:44 unittest -- unit/unittest.sh@274 -- # [[ y == y ]]
00:03:52.550    10:18:44 unittest -- unit/unittest.sh@275 -- # hostname
00:03:52.550   10:18:44 unittest -- unit/unittest.sh@275 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -d . -c --no-external -t freebsd-cloud-1725281765-2372.local -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:03:52.811  geninfo: WARNING: invalid characters removed from testname!
00:03:58.097   10:18:49 unittest -- unit/unittest.sh@276 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info
00:04:03.389   10:18:55 unittest -- unit/unittest.sh@277 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_total.info -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:04:05.959   10:18:57 unittest -- unit/unittest.sh@278 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/app/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:04:09.250   10:19:00 unittest -- unit/unittest.sh@279 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:04:11.796   10:19:03 unittest -- unit/unittest.sh@280 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/examples/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:04:15.097   10:19:06 unittest -- unit/unittest.sh@281 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info '/home/vagrant/spdk_repo/spdk/test/*' -o /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:04:17.657   10:19:09 unittest -- unit/unittest.sh@282 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_base.info /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_test.info
00:04:17.657   10:19:09 unittest -- unit/unittest.sh@283 -- # genhtml /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info --output-directory /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:04:18.603  Reading data file /home/vagrant/spdk_repo/spdk/../output/ut_coverage/ut_cov_unit.info
00:04:18.603  Found 231 entries.
00:04:18.603  Found common filename prefix "/home/vagrant/spdk_repo/spdk"
00:04:18.603  Writing .css and .png files.
00:04:18.603  Generating output.
00:04:19.176  Processing file include/spdk/thread.h
00:04:19.176  Processing file include/spdk/nvme.h
00:04:19.176  Processing file include/spdk/util.h
00:04:19.176  Processing file include/spdk/nvmf_transport.h
00:04:19.176  Processing file include/spdk/nvme_spec.h
00:04:19.176  Processing file include/spdk/trace.h
00:04:19.176  Processing file include/spdk/histogram_data.h
00:04:19.176  Processing file include/spdk/endian.h
00:04:19.176  Processing file include/spdk/mmio.h
00:04:19.176  Processing file include/spdk/bdev_module.h
00:04:19.176  Processing file include/spdk/base64.h
00:04:19.439  Processing file include/spdk_internal/rdma_utils.h
00:04:19.439  Processing file include/spdk_internal/sgl.h
00:04:19.439  Processing file include/spdk_internal/nvme_tcp.h
00:04:19.439  Processing file include/spdk_internal/utf.h
00:04:19.439  Processing file include/spdk_internal/sock.h
00:04:19.700  Processing file lib/accel/accel_rpc.c
00:04:19.700  Processing file lib/accel/accel_sw.c
00:04:19.700  Processing file lib/accel/accel.c
00:04:20.271  Processing file lib/bdev/bdev_zone.c
00:04:20.271  Processing file lib/bdev/bdev.c
00:04:20.271  Processing file lib/bdev/bdev_rpc.c
00:04:20.271  Processing file lib/bdev/part.c
00:04:20.271  Processing file lib/bdev/scsi_nvme.c
00:04:20.533  Processing file lib/blob/blob_bs_dev.c
00:04:20.533  Processing file lib/blob/request.c
00:04:20.533  Processing file lib/blob/zeroes.c
00:04:20.533  Processing file lib/blob/blobstore.h
00:04:20.533  Processing file lib/blob/blobstore.c
00:04:20.794  Processing file lib/blobfs/blobfs.c
00:04:20.794  Processing file lib/blobfs/tree.c
00:04:20.794  Processing file lib/conf/conf.c
00:04:21.056  Processing file lib/dma/dma.c
00:04:21.631  Processing file lib/env_dpdk/pci_dpdk_2207.c
00:04:21.631  Processing file lib/env_dpdk/pci_ioat.c
00:04:21.631  Processing file lib/env_dpdk/pci_dpdk_2211.c
00:04:21.631  Processing file lib/env_dpdk/pci_idxd.c
00:04:21.631  Processing file lib/env_dpdk/memory.c
00:04:21.631  Processing file lib/env_dpdk/threads.c
00:04:21.631  Processing file lib/env_dpdk/env.c
00:04:21.631  Processing file lib/env_dpdk/pci_event.c
00:04:21.631  Processing file lib/env_dpdk/pci_dpdk.c
00:04:21.631  Processing file lib/env_dpdk/pci_virtio.c
00:04:21.631  Processing file lib/env_dpdk/init.c
00:04:21.631  Processing file lib/env_dpdk/sigbus_handler.c
00:04:21.631  Processing file lib/env_dpdk/pci.c
00:04:21.631  Processing file lib/env_dpdk/pci_vmd.c
00:04:21.894  Processing file lib/event/log_rpc.c
00:04:21.894  Processing file lib/event/reactor.c
00:04:21.894  Processing file lib/event/scheduler_static.c
00:04:21.894  Processing file lib/event/app_rpc.c
00:04:21.894  Processing file lib/event/app.c
00:04:22.156  Processing file lib/idxd/idxd.c
00:04:22.156  Processing file lib/idxd/idxd_user.c
00:04:22.156  Processing file lib/idxd/idxd_internal.h
00:04:22.417  Processing file lib/init/json_config.c
00:04:22.417  Processing file lib/init/rpc.c
00:04:22.417  Processing file lib/init/subsystem.c
00:04:22.417  Processing file lib/init/subsystem_rpc.c
00:04:22.678  Processing file lib/ioat/ioat.c
00:04:22.678  Processing file lib/ioat/ioat_internal.h
00:04:23.622  Processing file lib/iscsi/iscsi.c
00:04:23.622  Processing file lib/iscsi/iscsi.h
00:04:23.622  Processing file lib/iscsi/portal_grp.c
00:04:23.622  Processing file lib/iscsi/param.c
00:04:23.622  Processing file lib/iscsi/task.c
00:04:23.622  Processing file lib/iscsi/init_grp.c
00:04:23.622  Processing file lib/iscsi/conn.c
00:04:23.622  Processing file lib/iscsi/tgt_node.c
00:04:23.622  Processing file lib/iscsi/iscsi_rpc.c
00:04:23.622  Processing file lib/iscsi/iscsi_subsystem.c
00:04:23.622  Processing file lib/iscsi/task.h
00:04:23.622  Processing file lib/json/json_write.c
00:04:23.622  Processing file lib/json/json_util.c
00:04:23.622  Processing file lib/json/json_parse.c
00:04:23.884  Processing file lib/jsonrpc/jsonrpc_client.c
00:04:23.884  Processing file lib/jsonrpc/jsonrpc_server_tcp.c
00:04:23.884  Processing file lib/jsonrpc/jsonrpc_server.c
00:04:23.884  Processing file lib/jsonrpc/jsonrpc_client_tcp.c
00:04:23.884  Processing file lib/keyring/keyring.c
00:04:23.884  Processing file lib/keyring/keyring_rpc.c
00:04:24.147  Processing file lib/log/log.c
00:04:24.147  Processing file lib/log/log_flags.c
00:04:24.147  Processing file lib/log/log_deprecated.c
00:04:24.408  Processing file lib/lvol/lvol.c
00:04:24.409  Processing file lib/notify/notify.c
00:04:24.409  Processing file lib/notify/notify_rpc.c
00:04:25.794  Processing file lib/nvme/nvme.c
00:04:25.794  Processing file lib/nvme/nvme_ctrlr_ocssd_cmd.c
00:04:25.794  Processing file lib/nvme/nvme_pcie_common.c
00:04:25.794  Processing file lib/nvme/nvme_quirks.c
00:04:25.794  Processing file lib/nvme/nvme_ctrlr.c
00:04:25.794  Processing file lib/nvme/nvme_pcie.c
00:04:25.794  Processing file lib/nvme/nvme_ns_cmd.c
00:04:25.794  Processing file lib/nvme/nvme_discovery.c
00:04:25.794  Processing file lib/nvme/nvme_rdma.c
00:04:25.794  Processing file lib/nvme/nvme_ns_ocssd_cmd.c
00:04:25.794  Processing file lib/nvme/nvme_qpair.c
00:04:25.794  Processing file lib/nvme/nvme_transport.c
00:04:25.794  Processing file lib/nvme/nvme_opal.c
00:04:25.794  Processing file lib/nvme/nvme_io_msg.c
00:04:25.794  Processing file lib/nvme/nvme_zns.c
00:04:25.794  Processing file lib/nvme/nvme_stubs.c
00:04:25.794  Processing file lib/nvme/nvme_pcie_internal.h
00:04:25.794  Processing file lib/nvme/nvme_auth.c
00:04:25.794  Processing file lib/nvme/nvme_ns.c
00:04:25.794  Processing file lib/nvme/nvme_poll_group.c
00:04:25.794  Processing file lib/nvme/nvme_tcp.c
00:04:25.794  Processing file lib/nvme/nvme_internal.h
00:04:25.794  Processing file lib/nvme/nvme_fabric.c
00:04:25.794  Processing file lib/nvme/nvme_ctrlr_cmd.c
00:04:27.180  Processing file lib/nvmf/transport.c
00:04:27.180  Processing file lib/nvmf/ctrlr_discovery.c
00:04:27.180  Processing file lib/nvmf/subsystem.c
00:04:27.180  Processing file lib/nvmf/ctrlr_bdev.c
00:04:27.180  Processing file lib/nvmf/nvmf.c
00:04:27.180  Processing file lib/nvmf/rdma.c
00:04:27.180  Processing file lib/nvmf/tcp.c
00:04:27.180  Processing file lib/nvmf/nvmf_internal.h
00:04:27.180  Processing file lib/nvmf/stubs.c
00:04:27.180  Processing file lib/nvmf/ctrlr.c
00:04:27.180  Processing file lib/nvmf/nvmf_rpc.c
00:04:27.180  Processing file lib/nvmf/auth.c
00:04:27.180  Processing file lib/rdma_provider/rdma_provider_verbs.c
00:04:27.180  Processing file lib/rdma_provider/common.c
00:04:27.180  Processing file lib/rdma_utils/rdma_utils.c
00:04:27.180  Processing file lib/rpc/rpc.c
00:04:27.752  Processing file lib/scsi/scsi.c
00:04:27.752  Processing file lib/scsi/lun.c
00:04:27.752  Processing file lib/scsi/scsi_bdev.c
00:04:27.753  Processing file lib/scsi/task.c
00:04:27.753  Processing file lib/scsi/scsi_rpc.c
00:04:27.753  Processing file lib/scsi/dev.c
00:04:27.753  Processing file lib/scsi/scsi_pr.c
00:04:27.753  Processing file lib/scsi/port.c
00:04:28.014  Processing file lib/sock/sock.c
00:04:28.014  Processing file lib/sock/sock_rpc.c
00:04:28.014  Processing file lib/thread/iobuf.c
00:04:28.014  Processing file lib/thread/thread.c
00:04:28.274  Processing file lib/trace/trace_rpc.c
00:04:28.274  Processing file lib/trace/trace.c
00:04:28.274  Processing file lib/trace/trace_flags.c
00:04:28.534  Processing file lib/trace_parser/trace.cpp
00:04:28.534  Processing file lib/ut/ut.c
00:04:28.534  Processing file lib/ut_mock/mock.c
00:04:29.922  Processing file lib/util/base64.c
00:04:29.922  Processing file lib/util/fd_group.c
00:04:29.922  Processing file lib/util/crc16.c
00:04:29.922  Processing file lib/util/xor.c
00:04:29.922  Processing file lib/util/math.c
00:04:29.922  Processing file lib/util/crc32c.c
00:04:29.922  Processing file lib/util/md5.c
00:04:29.922  Processing file lib/util/bit_array.c
00:04:29.922  Processing file lib/util/uuid.c
00:04:29.922  Processing file lib/util/crc32.c
00:04:29.922  Processing file lib/util/strerror_tls.c
00:04:29.922  Processing file lib/util/string.c
00:04:29.922  Processing file lib/util/crc32_ieee.c
00:04:29.922  Processing file lib/util/dif.c
00:04:29.922  Processing file lib/util/fd.c
00:04:29.922  Processing file lib/util/iov.c
00:04:29.922  Processing file lib/util/file.c
00:04:29.922  Processing file lib/util/pipe.c
00:04:29.922  Processing file lib/util/cpuset.c
00:04:29.922  Processing file lib/util/crc64.c
00:04:29.922  Processing file lib/util/hexlify.c
00:04:29.922  Processing file lib/util/zipf.c
00:04:29.922  Processing file lib/util/net.c
00:04:29.922  Processing file lib/vmd/led.c
00:04:29.922  Processing file lib/vmd/vmd.c
00:04:30.184  Processing file module/accel/dsa/accel_dsa.c
00:04:30.184  Processing file module/accel/dsa/accel_dsa_rpc.c
00:04:30.184  Processing file module/accel/error/accel_error.c
00:04:30.184  Processing file module/accel/error/accel_error_rpc.c
00:04:30.463  Processing file module/accel/iaa/accel_iaa.c
00:04:30.463  Processing file module/accel/iaa/accel_iaa_rpc.c
00:04:30.463  Processing file module/accel/ioat/accel_ioat.c
00:04:30.463  Processing file module/accel/ioat/accel_ioat_rpc.c
00:04:30.732  Processing file module/bdev/aio/bdev_aio_rpc.c
00:04:30.732  Processing file module/bdev/aio/bdev_aio.c
00:04:30.992  Processing file module/bdev/delay/vbdev_delay_rpc.c
00:04:30.992  Processing file module/bdev/delay/vbdev_delay.c
00:04:30.992  Processing file module/bdev/error/vbdev_error.c
00:04:30.992  Processing file module/bdev/error/vbdev_error_rpc.c
00:04:31.252  Processing file module/bdev/gpt/gpt.c
00:04:31.252  Processing file module/bdev/gpt/vbdev_gpt.c
00:04:31.252  Processing file module/bdev/gpt/gpt.h
00:04:31.512  Processing file module/bdev/lvol/vbdev_lvol.c
00:04:31.512  Processing file module/bdev/lvol/vbdev_lvol_rpc.c
00:04:31.774  Processing file module/bdev/malloc/bdev_malloc_rpc.c
00:04:31.774  Processing file module/bdev/malloc/bdev_malloc.c
00:04:31.774  Processing file module/bdev/null/bdev_null_rpc.c
00:04:31.774  Processing file module/bdev/null/bdev_null.c
00:04:32.347  Processing file module/bdev/nvme/bdev_nvme_rpc.c
00:04:32.347  Processing file module/bdev/nvme/nvme_rpc.c
00:04:32.347  Processing file module/bdev/nvme/bdev_nvme.c
00:04:32.347  Processing file module/bdev/nvme/bdev_mdns_client.c
00:04:32.347  Processing file module/bdev/passthru/vbdev_passthru.c
00:04:32.347  Processing file module/bdev/passthru/vbdev_passthru_rpc.c
00:04:32.916  Processing file module/bdev/raid/bdev_raid_sb.c
00:04:32.916  Processing file module/bdev/raid/concat.c
00:04:32.916  Processing file module/bdev/raid/bdev_raid.h
00:04:32.916  Processing file module/bdev/raid/bdev_raid_rpc.c
00:04:32.916  Processing file module/bdev/raid/raid1.c
00:04:32.916  Processing file module/bdev/raid/bdev_raid.c
00:04:32.916  Processing file module/bdev/raid/raid0.c
00:04:33.175  Processing file module/bdev/split/vbdev_split_rpc.c
00:04:33.175  Processing file module/bdev/split/vbdev_split.c
00:04:33.175  Processing file module/bdev/zone_block/vbdev_zone_block_rpc.c
00:04:33.175  Processing file module/bdev/zone_block/vbdev_zone_block.c
00:04:33.434  Processing file module/blob/bdev/blob_bdev.c
00:04:33.434  Processing file module/blobfs/bdev/blobfs_bdev.c
00:04:33.434  Processing file module/blobfs/bdev/blobfs_bdev_rpc.c
00:04:33.694  Processing file module/env_dpdk/env_dpdk_rpc.c
00:04:33.694  Processing file module/event/subsystems/accel/accel.c
00:04:33.694  Processing file module/event/subsystems/bdev/bdev.c
00:04:33.953  Processing file module/event/subsystems/iobuf/iobuf.c
00:04:33.953  Processing file module/event/subsystems/iobuf/iobuf_rpc.c
00:04:33.953  Processing file module/event/subsystems/iscsi/iscsi.c
00:04:34.214  Processing file module/event/subsystems/keyring/keyring.c
00:04:34.214  Processing file module/event/subsystems/nvmf/nvmf_tgt.c
00:04:34.214  Processing file module/event/subsystems/nvmf/nvmf_rpc.c
00:04:34.474  Processing file module/event/subsystems/scheduler/scheduler.c
00:04:34.474  Processing file module/event/subsystems/scsi/scsi.c
00:04:34.474  Processing file module/event/subsystems/sock/sock.c
00:04:34.736  Processing file module/event/subsystems/vmd/vmd_rpc.c
00:04:34.736  Processing file module/event/subsystems/vmd/vmd.c
00:04:34.998  Processing file module/keyring/file/keyring_rpc.c
00:04:34.998  Processing file module/keyring/file/keyring.c
00:04:34.998  Processing file module/scheduler/dynamic/scheduler_dynamic.c
00:04:35.258  Processing file module/sock/posix/posix.c
00:04:35.259  Writing directory view page.
00:04:35.259  Overall coverage rate:
00:04:35.259    lines......: 43.7% (41940 of 96020 lines)
00:04:35.259    functions..: 48.2% (3463 of 7183 functions)
00:04:35.259  Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage
00:04:35.259   10:19:27 unittest -- unit/unittest.sh@284 -- # echo 'Note: coverage report is here: /home/vagrant/spdk_repo/spdk/../output/ut_coverage'
00:04:35.259   10:19:27 unittest -- unit/unittest.sh@287 -- # set +x
00:04:35.259  
00:04:35.259  
00:04:35.259  =====================
00:04:35.259  All unit tests passed
00:04:35.259  =====================
00:04:35.259  WARN: neither valgrind nor ASAN is enabled!
00:04:35.259  
00:04:35.259  
00:04:35.259  
00:04:35.259  real	1m3.366s
00:04:35.259  user	0m36.800s
00:04:35.259  sys	0m5.293s
00:04:35.259  ************************************
00:04:35.259  END TEST unittest
00:04:35.259  ************************************
00:04:35.259   10:19:27 unittest -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:35.259   10:19:27 unittest -- common/autotest_common.sh@10 -- # set +x
00:04:35.259   10:19:27  -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']'
00:04:35.259   10:19:27  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:35.259   10:19:27  -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]]
00:04:35.259   10:19:27  -- spdk/autotest.sh@149 -- # timing_enter lib
00:04:35.259   10:19:27  -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:35.259   10:19:27  -- common/autotest_common.sh@10 -- # set +x
00:04:35.259   10:19:27  -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]]
00:04:35.259   10:19:27  -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:35.259   10:19:27  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.259   10:19:27  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.259   10:19:27  -- common/autotest_common.sh@10 -- # set +x
00:04:35.259  ************************************
00:04:35.259  START TEST env
00:04:35.259  ************************************
00:04:35.259   10:19:27 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh
00:04:35.519  * Looking for test storage...
00:04:35.519  * Found test storage at /home/vagrant/spdk_repo/spdk/test/env
00:04:35.519    10:19:27 env -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:35.519     10:19:27 env -- common/autotest_common.sh@1711 -- # lcov --version
00:04:35.519     10:19:27 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:35.519    10:19:27 env -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:35.519    10:19:27 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:35.519    10:19:27 env -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:35.519    10:19:27 env -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:35.519    10:19:27 env -- scripts/common.sh@336 -- # IFS=.-:
00:04:35.519    10:19:27 env -- scripts/common.sh@336 -- # read -ra ver1
00:04:35.519    10:19:27 env -- scripts/common.sh@337 -- # IFS=.-:
00:04:35.519    10:19:27 env -- scripts/common.sh@337 -- # read -ra ver2
00:04:35.519    10:19:27 env -- scripts/common.sh@338 -- # local 'op=<'
00:04:35.519    10:19:27 env -- scripts/common.sh@340 -- # ver1_l=2
00:04:35.519    10:19:27 env -- scripts/common.sh@341 -- # ver2_l=1
00:04:35.519    10:19:27 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:35.519    10:19:27 env -- scripts/common.sh@344 -- # case "$op" in
00:04:35.519    10:19:27 env -- scripts/common.sh@345 -- # : 1
00:04:35.519    10:19:27 env -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:35.519    10:19:27 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:35.519     10:19:27 env -- scripts/common.sh@365 -- # decimal 1
00:04:35.519     10:19:27 env -- scripts/common.sh@353 -- # local d=1
00:04:35.519     10:19:27 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:35.519     10:19:27 env -- scripts/common.sh@355 -- # echo 1
00:04:35.519    10:19:27 env -- scripts/common.sh@365 -- # ver1[v]=1
00:04:35.519     10:19:27 env -- scripts/common.sh@366 -- # decimal 2
00:04:35.519     10:19:27 env -- scripts/common.sh@353 -- # local d=2
00:04:35.519     10:19:27 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:35.519     10:19:27 env -- scripts/common.sh@355 -- # echo 2
00:04:35.519    10:19:27 env -- scripts/common.sh@366 -- # ver2[v]=2
00:04:35.519    10:19:27 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:35.519    10:19:27 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:35.519    10:19:27 env -- scripts/common.sh@368 -- # return 0
00:04:35.519    10:19:27 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:35.519    10:19:27 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:35.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.519  		--rc genhtml_branch_coverage=1
00:04:35.519  		--rc genhtml_function_coverage=1
00:04:35.519  		--rc genhtml_legend=1
00:04:35.519  		--rc geninfo_all_blocks=1
00:04:35.519  		--rc geninfo_unexecuted_blocks=1
00:04:35.519  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:35.519  		'
00:04:35.519    10:19:27 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:35.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.519  		--rc genhtml_branch_coverage=1
00:04:35.519  		--rc genhtml_function_coverage=1
00:04:35.519  		--rc genhtml_legend=1
00:04:35.519  		--rc geninfo_all_blocks=1
00:04:35.519  		--rc geninfo_unexecuted_blocks=1
00:04:35.519  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:35.519  		'
00:04:35.519    10:19:27 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:35.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.519  		--rc genhtml_branch_coverage=1
00:04:35.519  		--rc genhtml_function_coverage=1
00:04:35.519  		--rc genhtml_legend=1
00:04:35.519  		--rc geninfo_all_blocks=1
00:04:35.519  		--rc geninfo_unexecuted_blocks=1
00:04:35.519  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:35.519  		'
00:04:35.519    10:19:27 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:35.519  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:35.519  		--rc genhtml_branch_coverage=1
00:04:35.519  		--rc genhtml_function_coverage=1
00:04:35.519  		--rc genhtml_legend=1
00:04:35.519  		--rc geninfo_all_blocks=1
00:04:35.519  		--rc geninfo_unexecuted_blocks=1
00:04:35.519  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:35.519  		'
00:04:35.519   10:19:27 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:35.519   10:19:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.519   10:19:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.519   10:19:27 env -- common/autotest_common.sh@10 -- # set +x
00:04:35.519  ************************************
00:04:35.519  START TEST env_memory
00:04:35.519  ************************************
00:04:35.519   10:19:27 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut
00:04:35.519  
00:04:35.519  
00:04:35.519       CUnit - A unit testing framework for C - Version 2.1-3
00:04:35.519       http://cunit.sourceforge.net/
00:04:35.519  
00:04:35.519  
00:04:35.519  Suite: memory
00:04:35.519    Test: alloc and free memory map ...[2024-12-09 10:19:27.522213] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:35.519  passed
00:04:35.519    Test: mem map translation ...[2024-12-09 10:19:27.530743] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:35.519  [2024-12-09 10:19:27.530790] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 596:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:35.519  [2024-12-09 10:19:27.530810] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:35.519  [2024-12-09 10:19:27.530821] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:35.519  passed
00:04:35.519    Test: mem map registration ...[2024-12-09 10:19:27.542225] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234
00:04:35.519  [2024-12-09 10:19:27.542270] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 348:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152
00:04:35.519  passed
00:04:35.519    Test: mem map adjacent registrations ...passed
00:04:35.519  
00:04:35.519  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:35.519                suites      1      1    n/a      0        0
00:04:35.519                 tests      4      4      4      0        0
00:04:35.519               asserts    152    152    152      0      n/a
00:04:35.519  
00:04:35.519  Elapsed time =    0.047 seconds
00:04:35.519  
00:04:35.519  real	0m0.049s
00:04:35.519  user	0m0.048s
00:04:35.519  sys	0m0.006s
00:04:35.519   10:19:27 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:35.520  ************************************
00:04:35.520  END TEST env_memory
00:04:35.520  ************************************
00:04:35.520   10:19:27 env.env_memory -- common/autotest_common.sh@10 -- # set +x
00:04:35.520   10:19:27 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:35.520   10:19:27 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:35.520   10:19:27 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:35.520   10:19:27 env -- common/autotest_common.sh@10 -- # set +x
00:04:35.520  ************************************
00:04:35.520  START TEST env_vtophys
00:04:35.520  ************************************
00:04:35.520   10:19:27 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:35.520  EAL: lib.eal log level changed from notice to debug
00:04:35.520  EAL: Sysctl reports 10 cpus
00:04:35.520  EAL: Detected lcore 0 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 1 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 2 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 3 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 4 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 5 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 6 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 7 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 8 as core 0 on socket 0
00:04:35.520  EAL: Detected lcore 9 as core 0 on socket 0
00:04:35.520  EAL: Maximum logical cores by configuration: 128
00:04:35.520  EAL: Detected CPU lcores: 10
00:04:35.520  EAL: Detected NUMA nodes: 1
00:04:35.520  EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:35.520  EAL: Checking presence of .so 'librte_eal.so.24'
00:04:35.520  EAL: Checking presence of .so 'librte_eal.so'
00:04:35.520  EAL: Detected static linkage of DPDK
00:04:35.520  EAL: No shared files mode enabled, IPC will be disabled
00:04:35.520  EAL: PCI scan found 10 devices
00:04:35.520  EAL: Specific IOVA mode is not requested, autodetecting
00:04:35.520  EAL: Selecting IOVA mode according to bus requests
00:04:35.520  EAL: Bus pci wants IOVA as 'PA'
00:04:35.520  EAL: Selected IOVA mode 'PA'
00:04:35.520  EAL: Contigmem driver has 8 buffers, each of size 256MB
00:04:35.520  EAL: Ask a virtual area of 0x2e000 bytes
00:04:35.520  EAL: WARNING! Base virtual address hint (0x1000005000 != 0x10006bd000) not respected!
00:04:35.520  EAL:    This may cause issues with mapping memory into secondary processes
00:04:35.520  EAL: Virtual area found at 0x10006bd000 (size = 0x2e000)
00:04:35.520  EAL: Setting up physically contiguous memory...
00:04:35.520  EAL: Ask a virtual area of 0x1000 bytes
00:04:35.520  EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1000f57000) not respected!
00:04:35.520  EAL:    This may cause issues with mapping memory into secondary processes
00:04:35.520  EAL: Virtual area found at 0x1000f57000 (size = 0x1000)
00:04:35.520  EAL: Memseg list allocated at socket 0, page size 0x40000kB
00:04:35.520  EAL: Ask a virtual area of 0xf0000000 bytes
00:04:35.520  EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected!
00:04:35.520  EAL:    This may cause issues with mapping memory into secondary processes
00:04:35.520  EAL: Virtual area found at 0x1060000000 (size = 0xf0000000)
00:04:35.520  EAL: VA reserved for memseg list at 0x1060000000, size f0000000
00:04:35.520  EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x140000000, len 268435456
00:04:35.780  EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x160000000, len 268435456
00:04:35.780  EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x170000000, len 268435456
00:04:35.780  EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x180000000, len 268435456
00:04:35.780  EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x190000000, len 268435456
00:04:35.780  EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x1a0000000, len 268435456
00:04:35.780  EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x1b0000000, len 268435456
00:04:35.780  EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x1c0000000, len 268435456
00:04:35.780  EAL: No shared files mode enabled, IPC is disabled
00:04:35.780  EAL: Added 2048M to heap on socket 0
00:04:35.780  EAL: TSC is not safe to use in SMP mode
00:04:35.780  EAL: TSC is not invariant
00:04:35.780  EAL: TSC frequency is ~2599999 KHz
00:04:35.780  EAL: Main lcore 0 is ready (tid=39919aa12000;cpuset=[0])
00:04:35.780  EAL: PCI scan found 10 devices
00:04:35.780  EAL: Registering mem event callbacks not supported
00:04:36.039  
00:04:36.039  
00:04:36.039       CUnit - A unit testing framework for C - Version 2.1-3
00:04:36.039       http://cunit.sourceforge.net/
00:04:36.039  
00:04:36.039  
00:04:36.039  Suite: components_suite
00:04:36.039    Test: vtophys_malloc_test ...passed
00:04:36.039    Test: vtophys_spdk_malloc_test ...passed
00:04:36.039  
00:04:36.039  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:36.039                suites      1      1    n/a      0        0
00:04:36.039                 tests      2      2      2      0        0
00:04:36.039               asserts    497    497    497      0      n/a
00:04:36.039  
00:04:36.039  Elapsed time =    0.180 seconds
00:04:36.039  
00:04:36.039  real	0m0.510s
00:04:36.039  user	0m0.180s
00:04:36.039  sys	0m0.327s
00:04:36.039   10:19:28 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.039   10:19:28 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:36.039  ************************************
00:04:36.039  END TEST env_vtophys
00:04:36.039  ************************************
00:04:36.039   10:19:28 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:36.039   10:19:28 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:36.039   10:19:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.039   10:19:28 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.300  ************************************
00:04:36.300  START TEST env_pci
00:04:36.300  ************************************
00:04:36.300   10:19:28 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut
00:04:36.300  
00:04:36.300  
00:04:36.300       CUnit - A unit testing framework for C - Version 2.1-3
00:04:36.300       http://cunit.sourceforge.net/
00:04:36.300  
00:04:36.300  
00:04:36.300  Suite: pci
00:04:36.300    Test: pci_hook ...passed
00:04:36.300  
00:04:36.300  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:04:36.300                suites      1      1    n/a      0        0
00:04:36.300                 tests      1      1      1      0        0
00:04:36.300               asserts     25     25     25      0      n/a
00:04:36.300  
00:04:36.300  Elapsed time =    0.000 seconds
00:04:36.300  EAL: Cannot find device (10000:00:01.0)
00:04:36.300  EAL: Failed to attach device on primary process
00:04:36.300  
00:04:36.300  real	0m0.009s
00:04:36.300  user	0m0.008s
00:04:36.300  sys	0m0.000s
00:04:36.300   10:19:28 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.300  ************************************
00:04:36.300  END TEST env_pci
00:04:36.300  ************************************
00:04:36.300   10:19:28 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:36.300   10:19:28 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:36.300    10:19:28 env -- env/env.sh@15 -- # uname
00:04:36.300   10:19:28 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']'
00:04:36.300   10:19:28 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1
00:04:36.300   10:19:28 env -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:04:36.300   10:19:28 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.300   10:19:28 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.300  ************************************
00:04:36.300  START TEST env_dpdk_post_init
00:04:36.300  ************************************
00:04:36.300   10:19:28 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1
00:04:36.300  EAL: Sysctl reports 10 cpus
00:04:36.300  EAL: Detected CPU lcores: 10
00:04:36.300  EAL: Detected NUMA nodes: 1
00:04:36.300  EAL: Detected static linkage of DPDK
00:04:36.300  EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:36.300  EAL: Selected IOVA mode 'PA'
00:04:36.300  EAL: Contigmem driver has 8 buffers, each of size 256MB
00:04:36.300  EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x140000000, len 268435456
00:04:36.300  EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x160000000, len 268435456
00:04:36.300  EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x170000000, len 268435456
00:04:36.564  EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x180000000, len 268435456
00:04:36.564  EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x190000000, len 268435456
00:04:36.564  EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x1a0000000, len 268435456
00:04:36.564  EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x1b0000000, len 268435456
00:04:36.564  EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x1c0000000, len 268435456
00:04:36.564  EAL: TSC is not safe to use in SMP mode
00:04:36.564  EAL: TSC is not invariant
00:04:36.564  TELEMETRY: No legacy callbacks, legacy socket not created
00:04:36.564  [2024-12-09 10:19:28.678567] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:04:36.564  EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1)
00:04:36.564  Starting DPDK initialization...
00:04:36.564  Starting SPDK post initialization...
00:04:36.564  SPDK NVMe probe
00:04:36.564  Attaching to 0000:00:10.0
00:04:36.564  Attached to 0000:00:10.0
00:04:36.564  Cleaning up...
00:04:36.825  
00:04:36.825  real	0m0.429s
00:04:36.825  user	0m0.012s
00:04:36.825  sys	0m0.390s
00:04:36.825   10:19:28 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.825   10:19:28 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:36.825  ************************************
00:04:36.825  END TEST env_dpdk_post_init
00:04:36.825  ************************************
00:04:36.825    10:19:28 env -- env/env.sh@26 -- # uname
00:04:36.825   10:19:28 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']'
00:04:36.825  
00:04:36.825  real	0m1.502s
00:04:36.825  user	0m0.436s
00:04:36.825  sys	0m0.926s
00:04:36.825   10:19:28 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:36.825   10:19:28 env -- common/autotest_common.sh@10 -- # set +x
00:04:36.825  ************************************
00:04:36.825  END TEST env
00:04:36.825  ************************************
00:04:36.825   10:19:28  -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:36.825   10:19:28  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:36.825   10:19:28  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:36.825   10:19:28  -- common/autotest_common.sh@10 -- # set +x
00:04:36.825  ************************************
00:04:36.825  START TEST rpc
00:04:36.825  ************************************
00:04:36.825   10:19:28 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:37.086  * Looking for test storage...
00:04:37.086  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:37.086    10:19:28 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:37.086     10:19:28 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:37.086     10:19:28 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:37.086    10:19:29 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:37.086    10:19:29 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:37.086    10:19:29 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:37.086    10:19:29 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:37.086    10:19:29 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:37.086    10:19:29 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:37.086    10:19:29 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:37.086    10:19:29 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:37.086    10:19:29 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:37.086    10:19:29 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:37.086    10:19:29 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:37.086    10:19:29 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:37.086    10:19:29 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:37.086    10:19:29 rpc -- scripts/common.sh@345 -- # : 1
00:04:37.086    10:19:29 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:37.086    10:19:29 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:37.086     10:19:29 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:37.086     10:19:29 rpc -- scripts/common.sh@353 -- # local d=1
00:04:37.086     10:19:29 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:37.086     10:19:29 rpc -- scripts/common.sh@355 -- # echo 1
00:04:37.086    10:19:29 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:37.086     10:19:29 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:37.086     10:19:29 rpc -- scripts/common.sh@353 -- # local d=2
00:04:37.086     10:19:29 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:37.086     10:19:29 rpc -- scripts/common.sh@355 -- # echo 2
00:04:37.086    10:19:29 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:37.086    10:19:29 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:37.086    10:19:29 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:37.086    10:19:29 rpc -- scripts/common.sh@368 -- # return 0
00:04:37.086    10:19:29 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:37.086    10:19:29 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:37.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.086  		--rc genhtml_branch_coverage=1
00:04:37.086  		--rc genhtml_function_coverage=1
00:04:37.086  		--rc genhtml_legend=1
00:04:37.086  		--rc geninfo_all_blocks=1
00:04:37.086  		--rc geninfo_unexecuted_blocks=1
00:04:37.086  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:37.086  		'
00:04:37.086    10:19:29 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:37.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.086  		--rc genhtml_branch_coverage=1
00:04:37.086  		--rc genhtml_function_coverage=1
00:04:37.086  		--rc genhtml_legend=1
00:04:37.086  		--rc geninfo_all_blocks=1
00:04:37.086  		--rc geninfo_unexecuted_blocks=1
00:04:37.086  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:37.086  		'
00:04:37.086    10:19:29 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:37.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.086  		--rc genhtml_branch_coverage=1
00:04:37.086  		--rc genhtml_function_coverage=1
00:04:37.086  		--rc genhtml_legend=1
00:04:37.086  		--rc geninfo_all_blocks=1
00:04:37.086  		--rc geninfo_unexecuted_blocks=1
00:04:37.086  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:37.086  		'
00:04:37.086    10:19:29 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:37.086  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:37.086  		--rc genhtml_branch_coverage=1
00:04:37.086  		--rc genhtml_function_coverage=1
00:04:37.086  		--rc genhtml_legend=1
00:04:37.086  		--rc geninfo_all_blocks=1
00:04:37.086  		--rc geninfo_unexecuted_blocks=1
00:04:37.086  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:37.086  		'
00:04:37.086   10:19:29 rpc -- rpc/rpc.sh@65 -- # spdk_pid=47521
00:04:37.086   10:19:29 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:37.086   10:19:29 rpc -- rpc/rpc.sh@67 -- # waitforlisten 47521
00:04:37.086   10:19:29 rpc -- common/autotest_common.sh@835 -- # '[' -z 47521 ']'
00:04:37.086   10:19:29 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev
00:04:37.086   10:19:29 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:37.086  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:37.086   10:19:29 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:37.086   10:19:29 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:37.086   10:19:29 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:37.086   10:19:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:37.086  [2024-12-09 10:19:29.075907] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:04:37.086  [2024-12-09 10:19:29.076208] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:37.347  EAL: TSC is not safe to use in SMP mode
00:04:37.348  EAL: TSC is not invariant
00:04:37.348  [2024-12-09 10:19:29.457959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:37.348  [2024-12-09 10:19:29.492129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:37.348  [2024-12-09 10:19:29.492174] app.c: 616:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 47521' to capture a snapshot of events at runtime.
00:04:37.348  [2024-12-09 10:19:29.492244] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:37.348  [2024-12-09 10:19:29.492260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
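The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message comes from `waitforlisten`, which retries (the trace sets `max_retries=100`) until the target's RPC socket is usable. A minimal, purely illustrative poll loop under that assumption — the real helper in `autotest_common.sh` also checks that the PID is alive and that the socket answers RPCs, and the function name `wait_for_socket` here is hypothetical:

```shell
# Illustrative sketch: poll until a UNIX socket path appears, with a retry
# cap. Not the real waitforlisten, which does additional liveness checks.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100}
    local i
    for (( i = 0; i < max_retries; i++ )); do
        # -S is true only for a socket file.
        if [[ -S $sock ]]; then return 0; fi
        sleep 0.1
    done
    return 1
}
```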
00:04:37.920   10:19:29 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:37.920   10:19:29 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:37.920   10:19:29 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:37.920   10:19:29 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc
00:04:37.920   10:19:29 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:37.920   10:19:29 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:37.920   10:19:29 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:37.920   10:19:29 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:37.920   10:19:29 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:37.920  ************************************
00:04:37.920  START TEST rpc_integrity
00:04:37.920  ************************************
00:04:37.920   10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:37.920    10:19:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.920   10:19:29 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:37.920    10:19:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:37.920   10:19:29 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:37.920    10:19:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.920   10:19:29 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:37.920    10:19:29 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.920    10:19:29 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.920    10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.920   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:37.920  {
00:04:37.920  "name": "Malloc0",
00:04:37.920  "aliases": [
00:04:37.920  "138768a4-b617-11ef-9b05-d5e34e08fe3b"
00:04:37.920  ],
00:04:37.920  "product_name": "Malloc disk",
00:04:37.920  "block_size": 512,
00:04:37.920  "num_blocks": 16384,
00:04:37.920  "uuid": "138768a4-b617-11ef-9b05-d5e34e08fe3b",
00:04:37.920  "assigned_rate_limits": {
00:04:37.920  "rw_ios_per_sec": 0,
00:04:37.920  "rw_mbytes_per_sec": 0,
00:04:37.920  "r_mbytes_per_sec": 0,
00:04:37.920  "w_mbytes_per_sec": 0
00:04:37.920  },
00:04:37.920  "claimed": false,
00:04:37.920  "zoned": false,
00:04:37.920  "supported_io_types": {
00:04:37.920  "read": true,
00:04:37.920  "write": true,
00:04:37.920  "unmap": true,
00:04:37.920  "flush": true,
00:04:37.920  "reset": true,
00:04:37.920  "nvme_admin": false,
00:04:37.920  "nvme_io": false,
00:04:37.920  "nvme_io_md": false,
00:04:37.920  "write_zeroes": true,
00:04:37.920  "zcopy": true,
00:04:37.920  "get_zone_info": false,
00:04:37.920  "zone_management": false,
00:04:37.920  "zone_append": false,
00:04:37.920  "compare": false,
00:04:37.920  "compare_and_write": false,
00:04:37.920  "abort": true,
00:04:37.920  "seek_hole": false,
00:04:37.920  "seek_data": false,
00:04:37.920  "copy": true,
00:04:37.920  "nvme_iov_md": false
00:04:37.920  },
00:04:37.920  "memory_domains": [
00:04:37.920  {
00:04:37.920  "dma_device_id": "system",
00:04:37.920  "dma_device_type": 1
00:04:37.920  },
00:04:37.920  {
00:04:37.920  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:37.920  "dma_device_type": 2
00:04:37.920  }
00:04:37.920  ],
00:04:37.920  "driver_specific": {}
00:04:37.920  }
00:04:37.920  ]'
00:04:37.920    10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:37.920   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
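The test above captures the `bdev_get_bdevs` JSON and checks it with `jq length`. The same query style can pull individual fields out of the dump; here is a standalone illustration against a trimmed stand-in for the Malloc0 record (this requires `jq` on the PATH, as the test scripts do):

```shell
# Trimmed stand-in for the bdev_get_bdevs output above, showing the kind of
# jq queries the rpc tests rely on: array length and per-bdev field access.
bdevs='[{"name": "Malloc0", "block_size": 512, "num_blocks": 16384, "claimed": false}]'

count=$(jq length <<< "$bdevs")            # number of bdevs in the array
name=$(jq -r '.[0].name' <<< "$bdevs")     # -r strips the JSON quotes
bytes=$(jq '.[0].block_size * .[0].num_blocks' <<< "$bdevs")  # total size
```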
00:04:37.920   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:37.920   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.920   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.920  [2024-12-09 10:19:30.023893] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:37.920  [2024-12-09 10:19:30.023935] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:37.920  [2024-12-09 10:19:30.024485] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d1a0323aa00
00:04:37.920  [2024-12-09 10:19:30.024519] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:37.920  [2024-12-09 10:19:30.025764] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:37.920  [2024-12-09 10:19:30.025810] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:37.920  Passthru0
00:04:37.920   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.920    10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:37.920    10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.920    10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.920    10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.920   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:37.920  {
00:04:37.920  "name": "Malloc0",
00:04:37.920  "aliases": [
00:04:37.920  "138768a4-b617-11ef-9b05-d5e34e08fe3b"
00:04:37.920  ],
00:04:37.920  "product_name": "Malloc disk",
00:04:37.920  "block_size": 512,
00:04:37.920  "num_blocks": 16384,
00:04:37.920  "uuid": "138768a4-b617-11ef-9b05-d5e34e08fe3b",
00:04:37.920  "assigned_rate_limits": {
00:04:37.920  "rw_ios_per_sec": 0,
00:04:37.920  "rw_mbytes_per_sec": 0,
00:04:37.920  "r_mbytes_per_sec": 0,
00:04:37.920  "w_mbytes_per_sec": 0
00:04:37.920  },
00:04:37.920  "claimed": true,
00:04:37.921  "claim_type": "exclusive_write",
00:04:37.921  "zoned": false,
00:04:37.921  "supported_io_types": {
00:04:37.921  "read": true,
00:04:37.921  "write": true,
00:04:37.921  "unmap": true,
00:04:37.921  "flush": true,
00:04:37.921  "reset": true,
00:04:37.921  "nvme_admin": false,
00:04:37.921  "nvme_io": false,
00:04:37.921  "nvme_io_md": false,
00:04:37.921  "write_zeroes": true,
00:04:37.921  "zcopy": true,
00:04:37.921  "get_zone_info": false,
00:04:37.921  "zone_management": false,
00:04:37.921  "zone_append": false,
00:04:37.921  "compare": false,
00:04:37.921  "compare_and_write": false,
00:04:37.921  "abort": true,
00:04:37.921  "seek_hole": false,
00:04:37.921  "seek_data": false,
00:04:37.921  "copy": true,
00:04:37.921  "nvme_iov_md": false
00:04:37.921  },
00:04:37.921  "memory_domains": [
00:04:37.921  {
00:04:37.921  "dma_device_id": "system",
00:04:37.921  "dma_device_type": 1
00:04:37.921  },
00:04:37.921  {
00:04:37.921  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:37.921  "dma_device_type": 2
00:04:37.921  }
00:04:37.921  ],
00:04:37.921  "driver_specific": {}
00:04:37.921  },
00:04:37.921  {
00:04:37.921  "name": "Passthru0",
00:04:37.921  "aliases": [
00:04:37.921  "a7dbf0d8-6d9f-9e5b-9f32-3da7d5e88b5b"
00:04:37.921  ],
00:04:37.921  "product_name": "passthru",
00:04:37.921  "block_size": 512,
00:04:37.921  "num_blocks": 16384,
00:04:37.921  "uuid": "a7dbf0d8-6d9f-9e5b-9f32-3da7d5e88b5b",
00:04:37.921  "assigned_rate_limits": {
00:04:37.921  "rw_ios_per_sec": 0,
00:04:37.921  "rw_mbytes_per_sec": 0,
00:04:37.921  "r_mbytes_per_sec": 0,
00:04:37.921  "w_mbytes_per_sec": 0
00:04:37.921  },
00:04:37.921  "claimed": false,
00:04:37.921  "zoned": false,
00:04:37.921  "supported_io_types": {
00:04:37.921  "read": true,
00:04:37.921  "write": true,
00:04:37.921  "unmap": true,
00:04:37.921  "flush": true,
00:04:37.921  "reset": true,
00:04:37.921  "nvme_admin": false,
00:04:37.921  "nvme_io": false,
00:04:37.921  "nvme_io_md": false,
00:04:37.921  "write_zeroes": true,
00:04:37.921  "zcopy": true,
00:04:37.921  "get_zone_info": false,
00:04:37.921  "zone_management": false,
00:04:37.921  "zone_append": false,
00:04:37.921  "compare": false,
00:04:37.921  "compare_and_write": false,
00:04:37.921  "abort": true,
00:04:37.921  "seek_hole": false,
00:04:37.921  "seek_data": false,
00:04:37.921  "copy": true,
00:04:37.921  "nvme_iov_md": false
00:04:37.921  },
00:04:37.921  "memory_domains": [
00:04:37.921  {
00:04:37.921  "dma_device_id": "system",
00:04:37.921  "dma_device_type": 1
00:04:37.921  },
00:04:37.921  {
00:04:37.921  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:37.921  "dma_device_type": 2
00:04:37.921  }
00:04:37.921  ],
00:04:37.921  "driver_specific": {
00:04:37.921  "passthru": {
00:04:37.921  "name": "Passthru0",
00:04:37.921  "base_bdev_name": "Malloc0"
00:04:37.921  }
00:04:37.921  }
00:04:37.921  }
00:04:37.921  ]'
00:04:37.921    10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:37.921   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:37.921   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:37.921   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.921   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.921   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.921   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:37.921   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.921   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.921   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.921    10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:37.921    10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:37.921    10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:37.921    10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:37.921   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:38.182    10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:38.182   10:19:30 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:38.182  
00:04:38.182  real	0m0.120s
00:04:38.182  user	0m0.043s
00:04:38.182  sys	0m0.016s
00:04:38.182   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.182   10:19:30 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.182  ************************************
00:04:38.182  END TEST rpc_integrity
00:04:38.182  ************************************
00:04:38.182   10:19:30 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:38.182   10:19:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.182   10:19:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.182   10:19:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.182  ************************************
00:04:38.182  START TEST rpc_plugins
00:04:38.182  ************************************
00:04:38.182   10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:38.182    10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.182   10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:38.182    10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.182   10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:38.182  {
00:04:38.182  "name": "Malloc1",
00:04:38.182  "aliases": [
00:04:38.182  "13a1a796-b617-11ef-9b05-d5e34e08fe3b"
00:04:38.182  ],
00:04:38.182  "product_name": "Malloc disk",
00:04:38.182  "block_size": 4096,
00:04:38.182  "num_blocks": 256,
00:04:38.182  "uuid": "13a1a796-b617-11ef-9b05-d5e34e08fe3b",
00:04:38.182  "assigned_rate_limits": {
00:04:38.182  "rw_ios_per_sec": 0,
00:04:38.182  "rw_mbytes_per_sec": 0,
00:04:38.182  "r_mbytes_per_sec": 0,
00:04:38.182  "w_mbytes_per_sec": 0
00:04:38.182  },
00:04:38.182  "claimed": false,
00:04:38.182  "zoned": false,
00:04:38.182  "supported_io_types": {
00:04:38.182  "read": true,
00:04:38.182  "write": true,
00:04:38.182  "unmap": true,
00:04:38.182  "flush": true,
00:04:38.182  "reset": true,
00:04:38.182  "nvme_admin": false,
00:04:38.182  "nvme_io": false,
00:04:38.182  "nvme_io_md": false,
00:04:38.182  "write_zeroes": true,
00:04:38.182  "zcopy": true,
00:04:38.182  "get_zone_info": false,
00:04:38.182  "zone_management": false,
00:04:38.182  "zone_append": false,
00:04:38.182  "compare": false,
00:04:38.182  "compare_and_write": false,
00:04:38.182  "abort": true,
00:04:38.182  "seek_hole": false,
00:04:38.182  "seek_data": false,
00:04:38.182  "copy": true,
00:04:38.182  "nvme_iov_md": false
00:04:38.182  },
00:04:38.182  "memory_domains": [
00:04:38.182  {
00:04:38.182  "dma_device_id": "system",
00:04:38.182  "dma_device_type": 1
00:04:38.182  },
00:04:38.182  {
00:04:38.182  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:38.182  "dma_device_type": 2
00:04:38.182  }
00:04:38.182  ],
00:04:38.182  "driver_specific": {}
00:04:38.182  }
00:04:38.182  ]'
00:04:38.182    10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:38.182   10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:38.182   10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:38.182   10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.182   10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:38.182   10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.182    10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:38.182    10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.182   10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:38.182    10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:38.182   10:19:30 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:38.182  
00:04:38.182  real	0m0.071s
00:04:38.182  user	0m0.009s
00:04:38.182  sys	0m0.022s
00:04:38.182   10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.182  ************************************
00:04:38.182  END TEST rpc_plugins
00:04:38.182  ************************************
00:04:38.182   10:19:30 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:38.182   10:19:30 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test
00:04:38.182   10:19:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.182   10:19:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.182   10:19:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.182  ************************************
00:04:38.182  START TEST rpc_trace_cmd_test
00:04:38.182  ************************************
00:04:38.182   10:19:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test
00:04:38.182   10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info
00:04:38.182    10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info
00:04:38.182    10:19:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.182    10:19:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:38.182    10:19:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.182   10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{
00:04:38.182  "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid47521",
00:04:38.182  "tpoint_group_mask": "0x8",
00:04:38.182  "iscsi_conn": {
00:04:38.182  "mask": "0x2",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "scsi": {
00:04:38.182  "mask": "0x4",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "bdev": {
00:04:38.182  "mask": "0x8",
00:04:38.182  "tpoint_mask": "0xffffffffffffffff"
00:04:38.182  },
00:04:38.182  "nvmf_rdma": {
00:04:38.182  "mask": "0x10",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "nvmf_tcp": {
00:04:38.182  "mask": "0x20",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "blobfs": {
00:04:38.182  "mask": "0x80",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "dsa": {
00:04:38.182  "mask": "0x200",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "thread": {
00:04:38.182  "mask": "0x400",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "nvme_pcie": {
00:04:38.182  "mask": "0x800",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "iaa": {
00:04:38.182  "mask": "0x1000",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "nvme_tcp": {
00:04:38.182  "mask": "0x2000",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "bdev_nvme": {
00:04:38.182  "mask": "0x4000",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "sock": {
00:04:38.182  "mask": "0x8000",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "blob": {
00:04:38.182  "mask": "0x10000",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "bdev_raid": {
00:04:38.182  "mask": "0x20000",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  },
00:04:38.182  "scheduler": {
00:04:38.182  "mask": "0x40000",
00:04:38.182  "tpoint_mask": "0x0"
00:04:38.182  }
00:04:38.182  }'
00:04:38.183    10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length
00:04:38.183   10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']'
00:04:38.183    10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")'
00:04:38.183   10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']'
00:04:38.183    10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")'
00:04:38.183   10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']'
00:04:38.183    10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")'
00:04:38.445   10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']'
00:04:38.445    10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask
00:04:38.445   10:19:30 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']'
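The `trace_get_info` JSON above shows how tracepoint groups map to mask bits: each subsystem owns one bit (`iscsi_conn`=0x2, `scsi`=0x4, `bdev`=0x8, ...), and starting the target with `-e bdev` sets `tpoint_group_mask` to 0x8 with the bdev group's `tpoint_mask` fully enabled. A mask check like the one the test performs is plain shell arithmetic:

```shell
# Sketch of the bit test implied by the trace JSON: is the bdev trace group
# (bit 0x8) enabled in the reported tpoint_group_mask?
group_mask=0x8   # value reported by trace_get_info above
bdev_bit=0x8     # the "bdev" group's bit, per the mask table in the JSON

if (( group_mask & bdev_bit )); then
    bdev_tracing=enabled
else
    bdev_tracing=disabled
fi
```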
00:04:38.445  
00:04:38.445  real	0m0.056s
00:04:38.445  user	0m0.019s
00:04:38.445  sys	0m0.030s
00:04:38.445   10:19:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.445  ************************************
00:04:38.445  END TEST rpc_trace_cmd_test
00:04:38.445  ************************************
00:04:38.445   10:19:30 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x
00:04:38.445   10:19:30 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]]
00:04:38.445   10:19:30 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd
00:04:38.445   10:19:30 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity
00:04:38.445   10:19:30 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.445   10:19:30 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.445   10:19:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.445  ************************************
00:04:38.445  START TEST rpc_daemon_integrity
00:04:38.445  ************************************
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:38.445  {
00:04:38.445  "name": "Malloc2",
00:04:38.445  "aliases": [
00:04:38.445  "13ccfd98-b617-11ef-9b05-d5e34e08fe3b"
00:04:38.445  ],
00:04:38.445  "product_name": "Malloc disk",
00:04:38.445  "block_size": 512,
00:04:38.445  "num_blocks": 16384,
00:04:38.445  "uuid": "13ccfd98-b617-11ef-9b05-d5e34e08fe3b",
00:04:38.445  "assigned_rate_limits": {
00:04:38.445  "rw_ios_per_sec": 0,
00:04:38.445  "rw_mbytes_per_sec": 0,
00:04:38.445  "r_mbytes_per_sec": 0,
00:04:38.445  "w_mbytes_per_sec": 0
00:04:38.445  },
00:04:38.445  "claimed": false,
00:04:38.445  "zoned": false,
00:04:38.445  "supported_io_types": {
00:04:38.445  "read": true,
00:04:38.445  "write": true,
00:04:38.445  "unmap": true,
00:04:38.445  "flush": true,
00:04:38.445  "reset": true,
00:04:38.445  "nvme_admin": false,
00:04:38.445  "nvme_io": false,
00:04:38.445  "nvme_io_md": false,
00:04:38.445  "write_zeroes": true,
00:04:38.445  "zcopy": true,
00:04:38.445  "get_zone_info": false,
00:04:38.445  "zone_management": false,
00:04:38.445  "zone_append": false,
00:04:38.445  "compare": false,
00:04:38.445  "compare_and_write": false,
00:04:38.445  "abort": true,
00:04:38.445  "seek_hole": false,
00:04:38.445  "seek_data": false,
00:04:38.445  "copy": true,
00:04:38.445  "nvme_iov_md": false
00:04:38.445  },
00:04:38.445  "memory_domains": [
00:04:38.445  {
00:04:38.445  "dma_device_id": "system",
00:04:38.445  "dma_device_type": 1
00:04:38.445  },
00:04:38.445  {
00:04:38.445  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:38.445  "dma_device_type": 2
00:04:38.445  }
00:04:38.445  ],
00:04:38.445  "driver_specific": {}
00:04:38.445  }
00:04:38.445  ]'
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.445  [2024-12-09 10:19:30.483902] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2
00:04:38.445  [2024-12-09 10:19:30.483946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:38.445  [2024-12-09 10:19:30.483972] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d1a0323aa00
00:04:38.445  [2024-12-09 10:19:30.483979] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:38.445  [2024-12-09 10:19:30.484719] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:38.445  [2024-12-09 10:19:30.484756] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:38.445  Passthru0
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.445    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.445   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:38.445  {
00:04:38.445  "name": "Malloc2",
00:04:38.445  "aliases": [
00:04:38.445  "13ccfd98-b617-11ef-9b05-d5e34e08fe3b"
00:04:38.445  ],
00:04:38.445  "product_name": "Malloc disk",
00:04:38.445  "block_size": 512,
00:04:38.445  "num_blocks": 16384,
00:04:38.445  "uuid": "13ccfd98-b617-11ef-9b05-d5e34e08fe3b",
00:04:38.445  "assigned_rate_limits": {
00:04:38.445  "rw_ios_per_sec": 0,
00:04:38.445  "rw_mbytes_per_sec": 0,
00:04:38.445  "r_mbytes_per_sec": 0,
00:04:38.445  "w_mbytes_per_sec": 0
00:04:38.445  },
00:04:38.445  "claimed": true,
00:04:38.445  "claim_type": "exclusive_write",
00:04:38.445  "zoned": false,
00:04:38.445  "supported_io_types": {
00:04:38.445  "read": true,
00:04:38.445  "write": true,
00:04:38.445  "unmap": true,
00:04:38.445  "flush": true,
00:04:38.445  "reset": true,
00:04:38.445  "nvme_admin": false,
00:04:38.445  "nvme_io": false,
00:04:38.445  "nvme_io_md": false,
00:04:38.445  "write_zeroes": true,
00:04:38.445  "zcopy": true,
00:04:38.445  "get_zone_info": false,
00:04:38.445  "zone_management": false,
00:04:38.445  "zone_append": false,
00:04:38.445  "compare": false,
00:04:38.445  "compare_and_write": false,
00:04:38.445  "abort": true,
00:04:38.445  "seek_hole": false,
00:04:38.445  "seek_data": false,
00:04:38.445  "copy": true,
00:04:38.445  "nvme_iov_md": false
00:04:38.445  },
00:04:38.445  "memory_domains": [
00:04:38.445  {
00:04:38.445  "dma_device_id": "system",
00:04:38.445  "dma_device_type": 1
00:04:38.445  },
00:04:38.445  {
00:04:38.445  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:38.445  "dma_device_type": 2
00:04:38.445  }
00:04:38.445  ],
00:04:38.445  "driver_specific": {}
00:04:38.445  },
00:04:38.445  {
00:04:38.445  "name": "Passthru0",
00:04:38.445  "aliases": [
00:04:38.445  "a59014de-1944-1158-90b0-f48e3512304e"
00:04:38.445  ],
00:04:38.445  "product_name": "passthru",
00:04:38.445  "block_size": 512,
00:04:38.445  "num_blocks": 16384,
00:04:38.445  "uuid": "a59014de-1944-1158-90b0-f48e3512304e",
00:04:38.445  "assigned_rate_limits": {
00:04:38.445  "rw_ios_per_sec": 0,
00:04:38.445  "rw_mbytes_per_sec": 0,
00:04:38.445  "r_mbytes_per_sec": 0,
00:04:38.445  "w_mbytes_per_sec": 0
00:04:38.445  },
00:04:38.445  "claimed": false,
00:04:38.445  "zoned": false,
00:04:38.445  "supported_io_types": {
00:04:38.445  "read": true,
00:04:38.445  "write": true,
00:04:38.445  "unmap": true,
00:04:38.445  "flush": true,
00:04:38.445  "reset": true,
00:04:38.445  "nvme_admin": false,
00:04:38.445  "nvme_io": false,
00:04:38.445  "nvme_io_md": false,
00:04:38.445  "write_zeroes": true,
00:04:38.445  "zcopy": true,
00:04:38.445  "get_zone_info": false,
00:04:38.445  "zone_management": false,
00:04:38.445  "zone_append": false,
00:04:38.445  "compare": false,
00:04:38.445  "compare_and_write": false,
00:04:38.445  "abort": true,
00:04:38.445  "seek_hole": false,
00:04:38.445  "seek_data": false,
00:04:38.445  "copy": true,
00:04:38.445  "nvme_iov_md": false
00:04:38.445  },
00:04:38.445  "memory_domains": [
00:04:38.445  {
00:04:38.445  "dma_device_id": "system",
00:04:38.445  "dma_device_type": 1
00:04:38.445  },
00:04:38.445  {
00:04:38.445  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:38.445  "dma_device_type": 2
00:04:38.445  }
00:04:38.445  ],
00:04:38.445  "driver_specific": {
00:04:38.446  "passthru": {
00:04:38.446  "name": "Passthru0",
00:04:38.446  "base_bdev_name": "Malloc2"
00:04:38.446  }
00:04:38.446  }
00:04:38.446  }
00:04:38.446  ]'
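The lines that follow show rpc.sh@21 piping the captured `bdev_get_bdevs` output through `jq length` and asserting the result equals 2 (Malloc2 plus Passthru0). A minimal standalone sketch of that length check, using a hypothetical two-element stand-in array instead of live RPC output and a `grep` count of top-level `"name"` keys in place of jq:

```shell
# Stand-in for the bdev_get_bdevs JSON captured into $bdevs above;
# one bdev object per line, as in the log.
bdevs='[
{"name": "Malloc2"},
{"name": "Passthru0"}
]'
# Approximate "jq length" for this flat listing by counting name keys.
count=$(printf '%s\n' "$bdevs" | grep -c '"name"')
[ "$count" -eq 2 ] && echo "found $count bdevs"
```

With jq available, the real test simply runs `jq length` on the array and compares the number against the expected bdev count.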
00:04:38.446    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.446    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:38.446    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:38.446    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.446    10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:38.446    10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:38.446  
00:04:38.446  real	0m0.127s
00:04:38.446  user	0m0.056s
00:04:38.446  sys	0m0.008s
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.446   10:19:30 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:38.446  ************************************
00:04:38.446  END TEST rpc_daemon_integrity
00:04:38.446  ************************************
00:04:38.707   10:19:30 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:04:38.707   10:19:30 rpc -- rpc/rpc.sh@84 -- # killprocess 47521
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@954 -- # '[' -z 47521 ']'
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@958 -- # kill -0 47521
00:04:38.707    10:19:30 rpc -- common/autotest_common.sh@959 -- # uname
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:04:38.707    10:19:30 rpc -- common/autotest_common.sh@962 -- # ps -c -o command 47521
00:04:38.707    10:19:30 rpc -- common/autotest_common.sh@962 -- # tail -1
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:04:38.707  killing process with pid 47521
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47521'
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@973 -- # kill 47521
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@978 -- # wait 47521
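The `killprocess 47521` sequence above checks the pid, resolves the process name via `ps -c -o command` on FreeBSD, then kills and waits. A simplified sketch of that flow (an assumed reduction of the autotest_common.sh helper, which additionally refuses to kill `sudo` and logs the pid):

```shell
# Simplified killprocess: confirm the pid exists, send SIGTERM, reap it.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1   # no such process
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # ignore the signal exit status
    echo "killed process with pid $pid"
}

sleep 30 &
killprocess $!
```

The `kill -0` probe mirrors the `kill -0 47521` guard at autotest_common.sh@958: it sends no signal, only tests whether the pid is live.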
00:04:38.707  
00:04:38.707  real	0m1.918s
00:04:38.707  user	0m1.900s
00:04:38.707  sys	0m0.784s
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:38.707   10:19:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.707  ************************************
00:04:38.707  END TEST rpc
00:04:38.707  ************************************
00:04:38.707   10:19:30  -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:38.707   10:19:30  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.707   10:19:30  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.707   10:19:30  -- common/autotest_common.sh@10 -- # set +x
00:04:38.707  ************************************
00:04:38.707  START TEST skip_rpc
00:04:38.707  ************************************
00:04:38.707   10:19:30 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh
00:04:38.967  * Looking for test storage...
00:04:38.967  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:38.967    10:19:30 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:38.967     10:19:30 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:38.967     10:19:30 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:38.968    10:19:31 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@345 -- # : 1
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@365 -- # decimal 1
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@353 -- # local d=1
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@355 -- # echo 1
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@366 -- # decimal 2
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@353 -- # local d=2
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:38.968     10:19:31 skip_rpc -- scripts/common.sh@355 -- # echo 2
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:38.968    10:19:31 skip_rpc -- scripts/common.sh@368 -- # return 0
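The trace above walks scripts/common.sh comparing the installed lcov version (1.15) against 2: both strings are split on `.-:` into arrays, then compared field by field as decimals. A condensed sketch of that cmp_versions logic for the `<` case (a hypothetical simplification; the real helper also handles the other comparison operators and sanitizes non-numeric fields via `decimal`):

```shell
# lt VER1 VER2: succeed iff VER1 is strictly older than VER2,
# comparing dot/dash/colon-separated fields numerically.
lt() {
    local IFS=.-:                  # field separators, as in scripts/common.sh@336
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0   # strictly older
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1   # strictly newer
    done
    return 1                       # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

Missing trailing fields default to 0, so `1.15` compares against `2` as `1.15.0...` would, matching the loop bound `(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))` in the traced script.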
00:04:38.968    10:19:31 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:38.968    10:19:31 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:38.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.968  		--rc genhtml_branch_coverage=1
00:04:38.968  		--rc genhtml_function_coverage=1
00:04:38.968  		--rc genhtml_legend=1
00:04:38.968  		--rc geninfo_all_blocks=1
00:04:38.968  		--rc geninfo_unexecuted_blocks=1
00:04:38.968  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:38.968  		'
00:04:38.968    10:19:31 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:38.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.968  		--rc genhtml_branch_coverage=1
00:04:38.968  		--rc genhtml_function_coverage=1
00:04:38.968  		--rc genhtml_legend=1
00:04:38.968  		--rc geninfo_all_blocks=1
00:04:38.968  		--rc geninfo_unexecuted_blocks=1
00:04:38.968  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:38.968  		'
00:04:38.968    10:19:31 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:38.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.968  		--rc genhtml_branch_coverage=1
00:04:38.968  		--rc genhtml_function_coverage=1
00:04:38.968  		--rc genhtml_legend=1
00:04:38.968  		--rc geninfo_all_blocks=1
00:04:38.968  		--rc geninfo_unexecuted_blocks=1
00:04:38.968  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:38.968  		'
00:04:38.968    10:19:31 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:38.968  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:38.968  		--rc genhtml_branch_coverage=1
00:04:38.968  		--rc genhtml_function_coverage=1
00:04:38.968  		--rc genhtml_legend=1
00:04:38.968  		--rc geninfo_all_blocks=1
00:04:38.968  		--rc geninfo_unexecuted_blocks=1
00:04:38.968  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:38.968  		'
00:04:38.968   10:19:31 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:38.968   10:19:31 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:38.968   10:19:31 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc
00:04:38.968   10:19:31 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:38.968   10:19:31 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:38.968   10:19:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:38.968  ************************************
00:04:38.968  START TEST skip_rpc
00:04:38.968  ************************************
00:04:38.968   10:19:31 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc
00:04:38.968   10:19:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=47705
00:04:38.968   10:19:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:38.968   10:19:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5
00:04:38.968   10:19:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1
00:04:38.968  [2024-12-09 10:19:31.077056] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:04:38.968  [2024-12-09 10:19:31.077238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:39.536  EAL: TSC is not safe to use in SMP mode
00:04:39.536  EAL: TSC is not invariant
00:04:39.536  [2024-12-09 10:19:31.448023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:39.536  [2024-12-09 10:19:31.477307] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:39.536  [2024-12-09 10:19:31.477357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:44.807    10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
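The `NOT rpc_cmd spdk_get_version` sequence traced above asserts that the RPC *fails* while the target runs with `--no-rpc-server`: the wrapper runs the command, captures its exit status into `es`, and succeeds only when that status is nonzero. A minimal sketch of the inversion (an assumed simplification; the real autotest_common.sh helper also validates the argument type and treats `es > 128` signal exits specially, as the `(( es > 128 ))` line shows):

```shell
# NOT CMD...: succeed iff CMD fails.
NOT() {
    local es=0
    "$@" || es=$?     # capture the command's exit status without tripping errexit
    ((es != 0))       # invert: nonzero status from CMD means success here
}

NOT false && echo "expected failure observed"
```

This is why the `[[ 1 == 0 ]]` check inside `rpc_cmd` above is harmless: the failing RPC yields `es=1`, which is exactly what the surrounding test demands.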
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 47705
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 47705 ']'
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 47705
00:04:44.807    10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:04:44.807    10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # ps -c -o command 47705
00:04:44.807    10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # tail -1
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:04:44.807  killing process with pid 47705
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47705'
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 47705
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 47705
00:04:44.807  
00:04:44.807  real	0m5.164s
00:04:44.807  user	0m4.782s
00:04:44.807  sys	0m0.401s
00:04:44.807  ************************************
00:04:44.807  END TEST skip_rpc
00:04:44.807  ************************************
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:44.807   10:19:36 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.807   10:19:36 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json
00:04:44.807   10:19:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:44.807   10:19:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:44.807   10:19:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:44.807  ************************************
00:04:44.807  START TEST skip_rpc_with_json
00:04:44.807  ************************************
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=47750
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 47750
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 47750 ']'
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:44.807  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:44.807   10:19:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:44.807  [2024-12-09 10:19:36.293405] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:04:44.807  [2024-12-09 10:19:36.293588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:44.807  EAL: TSC is not safe to use in SMP mode
00:04:44.807  EAL: TSC is not invariant
00:04:44.807  [2024-12-09 10:19:36.613684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:44.807  [2024-12-09 10:19:36.638824] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:44.807  [2024-12-09 10:19:36.638861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:45.125  [2024-12-09 10:19:37.173025] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist
00:04:45.125  request:
00:04:45.125  {
00:04:45.125  "trtype": "tcp",
00:04:45.125  "method": "nvmf_get_transports",
00:04:45.125  "req_id": 1
00:04:45.125  }
00:04:45.125  Got JSON-RPC error response
00:04:45.125  response:
00:04:45.125  {
00:04:45.125  "code": -19,
00:04:45.125  "message": "Operation not supported by device"
00:04:45.125  }
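The request/response pair above is the raw JSON-RPC exchange behind `rpc_cmd nvmf_get_transports --trtype tcp`: with no TCP transport created yet, the target answers code -19 (ENODEV on FreeBSD). A sketch of how such a request is framed, with the field values taken from the request echoed in the log (the socket delivery line in the comment is illustrative, assuming the default `/var/tmp/spdk.sock` listen address mentioned earlier in the log):

```shell
# Build the JSON-RPC 2.0 request body; against a live target it would be
# written to the Unix socket, e.g. via: nc -U /var/tmp/spdk.sock
request='{"jsonrpc": "2.0", "id": 1, "method": "nvmf_get_transports", "params": {"trtype": "tcp"}}'
printf '%s\n' "$request"
```

Once `nvmf_create_transport -t tcp` succeeds a few lines below, the same request returns the transport list instead of the error.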
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:45.125  [2024-12-09 10:19:37.185038] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:45.125   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:45.401   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:45.401   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:45.401  {
00:04:45.401  "subsystems": [
00:04:45.401  {
00:04:45.401  "subsystem": "vmd",
00:04:45.401  "config": []
00:04:45.401  },
00:04:45.401  {
00:04:45.401  "subsystem": "iobuf",
00:04:45.401  "config": [
00:04:45.401  {
00:04:45.401  "method": "iobuf_set_options",
00:04:45.401  "params": {
00:04:45.401  "small_pool_count": 8192,
00:04:45.401  "large_pool_count": 1024,
00:04:45.401  "small_bufsize": 8192,
00:04:45.401  "large_bufsize": 135168,
00:04:45.401  "enable_numa": false
00:04:45.401  }
00:04:45.401  }
00:04:45.401  ]
00:04:45.401  },
00:04:45.401  {
00:04:45.401  "subsystem": "scheduler",
00:04:45.401  "config": [
00:04:45.401  {
00:04:45.401  "method": "framework_set_scheduler",
00:04:45.401  "params": {
00:04:45.401  "name": "static"
00:04:45.401  }
00:04:45.401  }
00:04:45.401  ]
00:04:45.401  },
00:04:45.401  {
00:04:45.401  "subsystem": "sock",
00:04:45.401  "config": [
00:04:45.401  {
00:04:45.401  "method": "sock_set_default_impl",
00:04:45.401  "params": {
00:04:45.401  "impl_name": "posix"
00:04:45.401  }
00:04:45.401  },
00:04:45.401  {
00:04:45.401  "method": "sock_impl_set_options",
00:04:45.401  "params": {
00:04:45.401  "impl_name": "ssl",
00:04:45.401  "recv_buf_size": 4096,
00:04:45.401  "send_buf_size": 4096,
00:04:45.401  "enable_recv_pipe": true,
00:04:45.401  "enable_quickack": false,
00:04:45.401  "enable_placement_id": 0,
00:04:45.401  "enable_zerocopy_send_server": true,
00:04:45.402  "enable_zerocopy_send_client": false,
00:04:45.402  "zerocopy_threshold": 0,
00:04:45.402  "tls_version": 0,
00:04:45.402  "enable_ktls": false
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "sock_impl_set_options",
00:04:45.402  "params": {
00:04:45.402  "impl_name": "posix",
00:04:45.402  "recv_buf_size": 2097152,
00:04:45.402  "send_buf_size": 2097152,
00:04:45.402  "enable_recv_pipe": true,
00:04:45.402  "enable_quickack": false,
00:04:45.402  "enable_placement_id": 0,
00:04:45.402  "enable_zerocopy_send_server": true,
00:04:45.402  "enable_zerocopy_send_client": false,
00:04:45.402  "zerocopy_threshold": 0,
00:04:45.402  "tls_version": 0,
00:04:45.402  "enable_ktls": false
00:04:45.402  }
00:04:45.402  }
00:04:45.402  ]
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "subsystem": "keyring",
00:04:45.402  "config": []
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "subsystem": "accel",
00:04:45.402  "config": [
00:04:45.402  {
00:04:45.402  "method": "accel_set_options",
00:04:45.402  "params": {
00:04:45.402  "small_cache_size": 128,
00:04:45.402  "large_cache_size": 16,
00:04:45.402  "task_count": 2048,
00:04:45.402  "sequence_count": 2048,
00:04:45.402  "buf_count": 2048
00:04:45.402  }
00:04:45.402  }
00:04:45.402  ]
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "subsystem": "bdev",
00:04:45.402  "config": [
00:04:45.402  {
00:04:45.402  "method": "bdev_set_options",
00:04:45.402  "params": {
00:04:45.402  "bdev_io_pool_size": 65535,
00:04:45.402  "bdev_io_cache_size": 256,
00:04:45.402  "bdev_auto_examine": true,
00:04:45.402  "iobuf_small_cache_size": 128,
00:04:45.402  "iobuf_large_cache_size": 16
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "bdev_raid_set_options",
00:04:45.402  "params": {
00:04:45.402  "process_window_size_kb": 1024,
00:04:45.402  "process_max_bandwidth_mb_sec": 0
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "bdev_nvme_set_options",
00:04:45.402  "params": {
00:04:45.402  "action_on_timeout": "none",
00:04:45.402  "timeout_us": 0,
00:04:45.402  "timeout_admin_us": 0,
00:04:45.402  "keep_alive_timeout_ms": 10000,
00:04:45.402  "arbitration_burst": 0,
00:04:45.402  "low_priority_weight": 0,
00:04:45.402  "medium_priority_weight": 0,
00:04:45.402  "high_priority_weight": 0,
00:04:45.402  "nvme_adminq_poll_period_us": 10000,
00:04:45.402  "nvme_ioq_poll_period_us": 0,
00:04:45.402  "io_queue_requests": 0,
00:04:45.402  "delay_cmd_submit": true,
00:04:45.402  "transport_retry_count": 4,
00:04:45.402  "bdev_retry_count": 3,
00:04:45.402  "transport_ack_timeout": 0,
00:04:45.402  "ctrlr_loss_timeout_sec": 0,
00:04:45.402  "reconnect_delay_sec": 0,
00:04:45.402  "fast_io_fail_timeout_sec": 0,
00:04:45.402  "disable_auto_failback": false,
00:04:45.402  "generate_uuids": false,
00:04:45.402  "transport_tos": 0,
00:04:45.402  "nvme_error_stat": false,
00:04:45.402  "rdma_srq_size": 0,
00:04:45.402  "io_path_stat": false,
00:04:45.402  "allow_accel_sequence": false,
00:04:45.402  "rdma_max_cq_size": 0,
00:04:45.402  "rdma_cm_event_timeout_ms": 0,
00:04:45.402  "dhchap_digests": [
00:04:45.402  "sha256",
00:04:45.402  "sha384",
00:04:45.402  "sha512"
00:04:45.402  ],
00:04:45.402  "dhchap_dhgroups": [
00:04:45.402  "null",
00:04:45.402  "ffdhe2048",
00:04:45.402  "ffdhe3072",
00:04:45.402  "ffdhe4096",
00:04:45.402  "ffdhe6144",
00:04:45.402  "ffdhe8192"
00:04:45.402  ]
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "bdev_nvme_set_hotplug",
00:04:45.402  "params": {
00:04:45.402  "period_us": 100000,
00:04:45.402  "enable": false
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "bdev_wait_for_examine"
00:04:45.402  }
00:04:45.402  ]
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "subsystem": "scsi",
00:04:45.402  "config": null
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "subsystem": "nvmf",
00:04:45.402  "config": [
00:04:45.402  {
00:04:45.402  "method": "nvmf_set_config",
00:04:45.402  "params": {
00:04:45.402  "discovery_filter": "match_any",
00:04:45.402  "admin_cmd_passthru": {
00:04:45.402  "identify_ctrlr": false
00:04:45.402  },
00:04:45.402  "dhchap_digests": [
00:04:45.402  "sha256",
00:04:45.402  "sha384",
00:04:45.402  "sha512"
00:04:45.402  ],
00:04:45.402  "dhchap_dhgroups": [
00:04:45.402  "null",
00:04:45.402  "ffdhe2048",
00:04:45.402  "ffdhe3072",
00:04:45.402  "ffdhe4096",
00:04:45.402  "ffdhe6144",
00:04:45.402  "ffdhe8192"
00:04:45.402  ]
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "nvmf_set_max_subsystems",
00:04:45.402  "params": {
00:04:45.402  "max_subsystems": 1024
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "nvmf_set_crdt",
00:04:45.402  "params": {
00:04:45.402  "crdt1": 0,
00:04:45.402  "crdt2": 0,
00:04:45.402  "crdt3": 0
00:04:45.402  }
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "method": "nvmf_create_transport",
00:04:45.402  "params": {
00:04:45.402  "trtype": "TCP",
00:04:45.402  "max_queue_depth": 128,
00:04:45.402  "max_io_qpairs_per_ctrlr": 127,
00:04:45.402  "in_capsule_data_size": 4096,
00:04:45.402  "max_io_size": 131072,
00:04:45.402  "io_unit_size": 131072,
00:04:45.402  "max_aq_depth": 128,
00:04:45.402  "num_shared_buffers": 511,
00:04:45.402  "buf_cache_size": 4294967295,
00:04:45.402  "dif_insert_or_strip": false,
00:04:45.402  "zcopy": false,
00:04:45.402  "c2h_success": true,
00:04:45.402  "sock_priority": 0,
00:04:45.402  "abort_timeout_sec": 1,
00:04:45.402  "ack_timeout": 0,
00:04:45.402  "data_wr_pool_size": 0
00:04:45.402  }
00:04:45.402  }
00:04:45.402  ]
00:04:45.402  },
00:04:45.402  {
00:04:45.402  "subsystem": "iscsi",
00:04:45.402  "config": [
00:04:45.402  {
00:04:45.402  "method": "iscsi_set_options",
00:04:45.402  "params": {
00:04:45.402  "node_base": "iqn.2016-06.io.spdk",
00:04:45.402  "max_sessions": 128,
00:04:45.402  "max_connections_per_session": 2,
00:04:45.402  "max_queue_depth": 64,
00:04:45.402  "default_time2wait": 2,
00:04:45.402  "default_time2retain": 20,
00:04:45.402  "first_burst_length": 8192,
00:04:45.402  "immediate_data": true,
00:04:45.402  "allow_duplicated_isid": false,
00:04:45.402  "error_recovery_level": 0,
00:04:45.402  "nop_timeout": 60,
00:04:45.402  "nop_in_interval": 30,
00:04:45.402  "disable_chap": false,
00:04:45.402  "require_chap": false,
00:04:45.402  "mutual_chap": false,
00:04:45.402  "chap_group": 0,
00:04:45.402  "max_large_datain_per_connection": 64,
00:04:45.402  "max_r2t_per_connection": 4,
00:04:45.402  "pdu_pool_size": 36864,
00:04:45.402  "immediate_data_pool_size": 16384,
00:04:45.402  "data_out_pool_size": 2048
00:04:45.402  }
00:04:45.402  }
00:04:45.402  ]
00:04:45.402  }
00:04:45.402  ]
00:04:45.402  }
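The config.json dumped above by `save_config` is replayed at skip_rpc.sh@46 via `spdk_tgt --no-rpc-server -m 0x1 --json config.json`, and the test then greps the log for the "TCP Transport Init" notice to prove the saved subsystems were restored. A quick standalone sanity check that such a dump parses as JSON (assumes `python3` on PATH; the one-subsystem file below is a hypothetical stand-in for the real dump):

```shell
# Write a minimal config in the same shape as the save_config output above.
cat > /tmp/config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "scsi",
      "config": null
    }
  ]
}
EOF
# json.tool exits nonzero on malformed input, so this doubles as validation.
python3 -m json.tool /tmp/config.json > /dev/null && echo "config.json parses"
```

Validating the dump before replay catches truncated writes early, rather than as an opaque startup failure of the next `spdk_tgt` instance.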
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 47750
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 47750 ']'
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 47750
00:04:45.402    10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:04:45.402    10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # tail -1
00:04:45.402    10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # ps -c -o command 47750
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:04:45.402  killing process with pid 47750
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47750'
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 47750
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 47750
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=47764
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5
00:04:45.402   10:19:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 47764
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 47764 ']'
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 47764
00:04:50.669    10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:04:50.669    10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # ps -c -o command 47764
00:04:50.669    10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # tail -1
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:04:50.669  killing process with pid 47764
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 47764'
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 47764
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 47764
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt
00:04:50.669  
00:04:50.669  real	0m6.285s
00:04:50.669  user	0m5.924s
00:04:50.669  sys	0m0.744s
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.669   10:19:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x
00:04:50.669  ************************************
00:04:50.669  END TEST skip_rpc_with_json
00:04:50.669  ************************************
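The trace above tears the target down via SPDK's `killprocess` helper: verify the pid is non-empty, probe it with `kill -0`, skip the `sudo` special case, then `kill` and `wait`. A minimal standalone sketch of that pattern (function name hypothetical; the real helper in `common/autotest_common.sh` also inspects the process name via `ps`/`uname` as traced above):

```shell
#!/usr/bin/env bash
# Sketch of the kill-and-wait teardown pattern from the trace.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1               # refuse an empty pid
    kill -0 "$pid" 2>/dev/null || return 0  # already gone: nothing to do
    echo "killing process with pid $pid"
    kill "$pid"                             # default SIGTERM
    wait "$pid" 2>/dev/null                 # reap; exit 143 is expected here
    return 0
}

sleep 60 &
killprocess_sketch $!
```

Waiting on the pid after killing it is what keeps the log free of zombie children between tests.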
00:04:50.669   10:19:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay
00:04:50.670   10:19:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:50.670   10:19:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.670   10:19:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.670  ************************************
00:04:50.670  START TEST skip_rpc_with_delay
00:04:50.670  ************************************
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:50.670    10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:50.670    10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]]
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
00:04:50.670  [2024-12-09 10:19:42.610933] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:04:50.670  
00:04:50.670  real	0m0.014s
00:04:50.670  user	0m0.006s
00:04:50.670  sys	0m0.009s
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.670   10:19:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:04:50.670  ************************************
00:04:50.670  END TEST skip_rpc_with_delay
00:04:50.670  ************************************
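The `skip_rpc_with_delay` test above relies on the `NOT` wrapper: run `spdk_tgt` with incompatible flags and pass only if it exits nonzero (capturing the status as `es`, and treating signal deaths, `es > 128`, separately). A simplified sketch of that idiom (name hypothetical; the real helper in `common/autotest_common.sh` additionally whitelists specific signals):

```shell
#!/usr/bin/env bash
# Sketch of the NOT idiom: succeed only if the wrapped command fails.
NOT_sketch() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return 1  # >128 means killed by a signal: not a clean failure
    (( es != 0 ))               # success iff the command returned nonzero
}

NOT_sketch false && echo "false failed as expected"
```

Inverting the exit status this way lets a negative test participate in `set -e`-style error handling without special-casing it.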
00:04:50.670    10:19:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:04:50.670   10:19:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']'
00:04:50.670   10:19:42 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json
00:04:50.670  
00:04:50.670  real	0m11.793s
00:04:50.670  user	0m10.891s
00:04:50.670  sys	0m1.313s
00:04:50.670   10:19:42 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.670   10:19:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:04:50.670  ************************************
00:04:50.670  END TEST skip_rpc
00:04:50.670  ************************************
00:04:50.670   10:19:42  -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:50.670   10:19:42  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:50.670   10:19:42  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.670   10:19:42  -- common/autotest_common.sh@10 -- # set +x
00:04:50.670  ************************************
00:04:50.670  START TEST rpc_client
00:04:50.670  ************************************
00:04:50.670   10:19:42 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh
00:04:50.670  * Looking for test storage...
00:04:50.670  * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client
00:04:50.670    10:19:42 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:50.670     10:19:42 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:50.670     10:19:42 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version
00:04:50.929    10:19:42 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@345 -- # : 1
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@353 -- # local d=1
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@355 -- # echo 1
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@353 -- # local d=2
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:50.929     10:19:42 rpc_client -- scripts/common.sh@355 -- # echo 2
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:50.929    10:19:42 rpc_client -- scripts/common.sh@368 -- # return 0
00:04:50.929    10:19:42 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:50.929    10:19:42 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:50.929  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.929  		--rc genhtml_branch_coverage=1
00:04:50.929  		--rc genhtml_function_coverage=1
00:04:50.929  		--rc genhtml_legend=1
00:04:50.929  		--rc geninfo_all_blocks=1
00:04:50.929  		--rc geninfo_unexecuted_blocks=1
00:04:50.929  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.929  		'
00:04:50.929    10:19:42 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:50.929  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.929  		--rc genhtml_branch_coverage=1
00:04:50.929  		--rc genhtml_function_coverage=1
00:04:50.929  		--rc genhtml_legend=1
00:04:50.929  		--rc geninfo_all_blocks=1
00:04:50.929  		--rc geninfo_unexecuted_blocks=1
00:04:50.929  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.929  		'
00:04:50.929    10:19:42 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:50.929  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.929  		--rc genhtml_branch_coverage=1
00:04:50.929  		--rc genhtml_function_coverage=1
00:04:50.929  		--rc genhtml_legend=1
00:04:50.929  		--rc geninfo_all_blocks=1
00:04:50.929  		--rc geninfo_unexecuted_blocks=1
00:04:50.929  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.929  		'
00:04:50.929    10:19:42 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:50.929  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.929  		--rc genhtml_branch_coverage=1
00:04:50.929  		--rc genhtml_function_coverage=1
00:04:50.929  		--rc genhtml_legend=1
00:04:50.929  		--rc geninfo_all_blocks=1
00:04:50.929  		--rc geninfo_unexecuted_blocks=1
00:04:50.929  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.929  		'
00:04:50.929   10:19:42 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test
00:04:50.929  OK
00:04:50.929   10:19:42 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:04:50.929  
00:04:50.929  real	0m0.178s
00:04:50.929  user	0m0.118s
00:04:50.929  sys	0m0.107s
00:04:50.929   10:19:42 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.929   10:19:42 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:04:50.929  ************************************
00:04:50.929  END TEST rpc_client
00:04:50.929  ************************************
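The `lt 1.15 2` / `cmp_versions` trace above splits both version strings on `.`, `-`, and `:` into arrays and compares them component-wise. A standalone sketch of that comparison, reduced to the `<` case (the real `scripts/common.sh` also normalizes non-numeric components via its `decimal` helper, omitted here):

```shell
#!/usr/bin/env bash
# Sketch of dotted-version less-than, mirroring the cmp_versions trace.
lt_sketch() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # IFS applies to this read only
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}  # pad missing components with 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt_sketch 1.15 2 && echo "1.15 < 2"
```

Component-wise comparison is what makes `1.15 < 2` hold even though a plain string compare would order them the other way.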
00:04:50.929   10:19:42  -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:50.929   10:19:42  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:50.929   10:19:42  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.929   10:19:42  -- common/autotest_common.sh@10 -- # set +x
00:04:50.929  ************************************
00:04:50.929  START TEST json_config
00:04:50.929  ************************************
00:04:50.929   10:19:42 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh
00:04:50.929    10:19:42 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:50.929     10:19:42 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:50.929     10:19:42 json_config -- common/autotest_common.sh@1711 -- # lcov --version
00:04:50.929    10:19:43 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:50.929    10:19:43 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:50.929    10:19:43 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:50.929    10:19:43 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:50.929    10:19:43 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:04:50.929    10:19:43 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:04:50.929    10:19:43 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:04:50.929    10:19:43 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:04:50.930    10:19:43 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:04:50.930    10:19:43 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:04:50.930    10:19:43 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:04:50.930    10:19:43 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:50.930    10:19:43 json_config -- scripts/common.sh@344 -- # case "$op" in
00:04:50.930    10:19:43 json_config -- scripts/common.sh@345 -- # : 1
00:04:50.930    10:19:43 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:50.930    10:19:43 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:50.930     10:19:43 json_config -- scripts/common.sh@365 -- # decimal 1
00:04:50.930     10:19:43 json_config -- scripts/common.sh@353 -- # local d=1
00:04:50.930     10:19:43 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:50.930     10:19:43 json_config -- scripts/common.sh@355 -- # echo 1
00:04:50.930    10:19:43 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:04:50.930     10:19:43 json_config -- scripts/common.sh@366 -- # decimal 2
00:04:50.930     10:19:43 json_config -- scripts/common.sh@353 -- # local d=2
00:04:50.930     10:19:43 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:50.930     10:19:43 json_config -- scripts/common.sh@355 -- # echo 2
00:04:50.930    10:19:43 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:04:50.930    10:19:43 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:50.930    10:19:43 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:50.930    10:19:43 json_config -- scripts/common.sh@368 -- # return 0
00:04:50.930    10:19:43 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:50.930    10:19:43 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:50.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.930  		--rc genhtml_branch_coverage=1
00:04:50.930  		--rc genhtml_function_coverage=1
00:04:50.930  		--rc genhtml_legend=1
00:04:50.930  		--rc geninfo_all_blocks=1
00:04:50.930  		--rc geninfo_unexecuted_blocks=1
00:04:50.930  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.930  		'
00:04:50.930    10:19:43 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:50.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.930  		--rc genhtml_branch_coverage=1
00:04:50.930  		--rc genhtml_function_coverage=1
00:04:50.930  		--rc genhtml_legend=1
00:04:50.930  		--rc geninfo_all_blocks=1
00:04:50.930  		--rc geninfo_unexecuted_blocks=1
00:04:50.930  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.930  		'
00:04:50.930    10:19:43 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:50.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.930  		--rc genhtml_branch_coverage=1
00:04:50.930  		--rc genhtml_function_coverage=1
00:04:50.930  		--rc genhtml_legend=1
00:04:50.930  		--rc geninfo_all_blocks=1
00:04:50.930  		--rc geninfo_unexecuted_blocks=1
00:04:50.930  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.930  		'
00:04:50.930    10:19:43 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:50.930  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:50.930  		--rc genhtml_branch_coverage=1
00:04:50.930  		--rc genhtml_function_coverage=1
00:04:50.930  		--rc genhtml_legend=1
00:04:50.930  		--rc geninfo_all_blocks=1
00:04:50.930  		--rc geninfo_unexecuted_blocks=1
00:04:50.930  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:50.930  		'
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:50.930     10:19:43 json_config -- nvmf/common.sh@7 -- # uname -s
00:04:50.930    10:19:43 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]]
00:04:50.930    10:19:43 json_config -- nvmf/common.sh@7 -- # return 0
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + 	SPDK_TEST_ISCSI + 	SPDK_TEST_NVMF + 	SPDK_TEST_VHOST + 	SPDK_TEST_VHOST_INIT + 	SPDK_TEST_RBD == 0 ))
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json')
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:50.930  INFO: JSON configuration test init
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:50.930   10:19:43 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:04:50.930   10:19:43 json_config -- json_config/common.sh@9 -- # local app=target
00:04:50.930   10:19:43 json_config -- json_config/common.sh@10 -- # shift
00:04:50.930   10:19:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:50.930   10:19:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:50.930   10:19:43 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:50.930   10:19:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:50.930   10:19:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:50.930   10:19:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=47935
00:04:50.930  Waiting for target to run...
00:04:50.930   10:19:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:50.930   10:19:43 json_config -- json_config/common.sh@25 -- # waitforlisten 47935 /var/tmp/spdk_tgt.sock
00:04:50.930   10:19:43 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 47935 ']'
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:50.930  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:50.930   10:19:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:50.930  [2024-12-09 10:19:43.048839] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:04:50.930  [2024-12-09 10:19:43.049002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:51.189  EAL: TSC is not safe to use in SMP mode
00:04:51.189  EAL: TSC is not invariant
00:04:51.189  [2024-12-09 10:19:43.207223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:51.189  [2024-12-09 10:19:43.232017] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:51.189  [2024-12-09 10:19:43.232060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:52.122   10:19:43 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:52.122   10:19:43 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:52.122  
00:04:52.122   10:19:43 json_config -- json_config/common.sh@26 -- # echo ''
00:04:52.122   10:19:43 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:04:52.122   10:19:43 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:04:52.122   10:19:43 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:52.122   10:19:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:52.122   10:19:43 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:04:52.122   10:19:43 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:04:52.122   10:19:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:52.122   10:19:43 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:52.122   10:19:43 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:04:52.122   10:19:43 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:04:52.122   10:19:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:04:52.122  [2024-12-09 10:19:44.158616] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:04:52.122   10:19:44 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:04:52.122   10:19:44 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:04:52.122   10:19:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:52.122   10:19:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:52.122   10:19:44 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:04:52.122   10:19:44 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:04:52.122   10:19:44 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:04:52.122   10:19:44 json_config -- json_config/json_config.sh@47 -- # [[ n == y ]]
00:04:52.122    10:19:44 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:04:52.122    10:19:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:04:52.122    10:19:44 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister')
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@51 -- # local get_types
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@54 -- # sort
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:04:52.379   10:19:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:52.379   10:19:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@62 -- # return 0
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@285 -- # [[ 1 -eq 1 ]]
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@286 -- # create_bdev_subsystem_config
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@112 -- # timing_enter create_bdev_subsystem_config
00:04:52.379   10:19:44 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:52.379   10:19:44 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@114 -- # expected_notifications=()
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@114 -- # local expected_notifications
00:04:52.379   10:19:44 json_config -- json_config/json_config.sh@118 -- # expected_notifications+=($(get_notifications))
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@118 -- # get_notifications
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:52.379    10:19:44 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:52.379     10:19:44 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:04:52.379     10:19:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:04:52.379     10:19:44 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:04:52.662    10:19:44 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:04:52.662    10:19:44 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:52.662    10:19:44 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:52.662   10:19:44 json_config -- json_config/json_config.sh@120 -- # [[ 1 -eq 1 ]]
00:04:52.662   10:19:44 json_config -- json_config/json_config.sh@121 -- # local lvol_store_base_bdev=Nvme0n1
00:04:52.662   10:19:44 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_split_create Nvme0n1 2
00:04:52.662   10:19:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2
00:04:52.662  Nvme0n1p0 Nvme0n1p1
00:04:52.663   10:19:44 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_split_create Malloc0 3
00:04:52.663   10:19:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3
00:04:52.934  [2024-12-09 10:19:44.960049] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:04:52.934  [2024-12-09 10:19:44.960094] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:04:52.934  
00:04:52.934   10:19:44 json_config -- json_config/json_config.sh@125 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3
00:04:52.934   10:19:44 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
00:04:53.192  Malloc3
00:04:53.192   10:19:45 json_config -- json_config/json_config.sh@126 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:04:53.192   10:19:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3
00:04:53.192  [2024-12-09 10:19:45.320057] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:04:53.192  [2024-12-09 10:19:45.320097] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:53.192  [2024-12-09 10:19:45.320116] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d356663b180
00:04:53.192  [2024-12-09 10:19:45.320121] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:53.192  [2024-12-09 10:19:45.320544] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:53.192  [2024-12-09 10:19:45.320573] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:04:53.192  PTBdevFromMalloc3
00:04:53.192   10:19:45 json_config -- json_config/json_config.sh@128 -- # tgt_rpc bdev_null_create Null0 32 512
00:04:53.192   10:19:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512
00:04:53.450  Null0
00:04:53.450   10:19:45 json_config -- json_config/json_config.sh@130 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0
00:04:53.450   10:19:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0
00:04:53.708  Malloc0
00:04:53.708   10:19:45 json_config -- json_config/json_config.sh@131 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1
00:04:53.708   10:19:45 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1
00:04:53.967  Malloc1
00:04:53.967   10:19:45 json_config -- json_config/json_config.sh@144 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1)
00:04:53.967   10:19:45 json_config -- json_config/json_config.sh@147 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400
00:04:53.967  102400+0 records in
00:04:53.967  102400+0 records out
00:04:53.967  104857600 bytes transferred in 0.195841 secs (535422543 bytes/sec)
00:04:53.967   10:19:46 json_config -- json_config/json_config.sh@148 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024
00:04:53.967   10:19:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024
00:04:54.225  aio_disk
00:04:54.225   10:19:46 json_config -- json_config/json_config.sh@149 -- # expected_notifications+=(bdev_register:aio_disk)
00:04:54.225   10:19:46 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:04:54.225   10:19:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test
00:04:54.482  1d5708b6-b617-11ef-9b05-d5e34e08fe3b
00:04:54.482   10:19:46 json_config -- json_config/json_config.sh@161 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)")
00:04:54.482    10:19:46 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32
00:04:54.482    10:19:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32
00:04:54.741    10:19:46 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32
00:04:54.741    10:19:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32
00:04:54.741    10:19:46 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:04:54.741    10:19:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0
00:04:54.999    10:19:47 json_config -- json_config/json_config.sh@161 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0
00:04:54.999    10:19:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0
00:04:55.258   10:19:47 json_config -- json_config/json_config.sh@164 -- # [[ 0 -eq 1 ]]
00:04:55.258   10:19:47 json_config -- json_config/json_config.sh@179 -- # [[ 0 -eq 1 ]]
00:04:55.258   10:19:47 json_config -- json_config/json_config.sh@185 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:1d72803f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1d8df77f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1da96ef9-b617-11ef-9b05-d5e34e08fe3b bdev_register:1dc4e675-b617-11ef-9b05-d5e34e08fe3b
00:04:55.258   10:19:47 json_config -- json_config/json_config.sh@74 -- # local events_to_check
00:04:55.258   10:19:47 json_config -- json_config/json_config.sh@75 -- # local recorded_events
00:04:55.258   10:19:47 json_config -- json_config/json_config.sh@78 -- # events_to_check=($(printf '%s\n' "$@" | sort))
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@78 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:1d72803f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1d8df77f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1da96ef9-b617-11ef-9b05-d5e34e08fe3b bdev_register:1dc4e675-b617-11ef-9b05-d5e34e08fe3b
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@78 -- # sort
00:04:55.258   10:19:47 json_config -- json_config/json_config.sh@79 -- # recorded_events=($(get_notifications | sort))
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@79 -- # get_notifications
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@66 -- # local ev_type ev_ctx event_id
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@79 -- # sort
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.258     10:19:47 json_config -- json_config/json_config.sh@65 -- # tgt_rpc notify_get_notifications -i 0
00:04:55.258     10:19:47 json_config -- json_config/json_config.sh@65 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"'
00:04:55.258     10:19:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p1
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Nvme0n1p0
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc3
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.258    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:PTBdevFromMalloc3
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Null0
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p2
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p1
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc0p0
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:Malloc1
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:aio_disk
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:1d72803f-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:1d8df77f-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:1da96ef9-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@69 -- # echo bdev_register:1dc4e675-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # IFS=:
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@68 -- # read -r ev_type ev_ctx event_id
00:04:55.259   10:19:47 json_config -- json_config/json_config.sh@81 -- # [[ bdev_register:1d72803f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1d8df77f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1da96ef9-b617-11ef-9b05-d5e34e08fe3b bdev_register:1dc4e675-b617-11ef-9b05-d5e34e08fe3b bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\d\7\2\8\0\3\f\-\b\6\1\7\-\1\1\e\f\-\9\b\0\5\-\d\5\e\3\4\e\0\8\f\e\3\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\d\8\d\f\7\7\f\-\b\6\1\7\-\1\1\e\f\-\9\b\0\5\-\d\5\e\3\4\e\0\8\f\e\3\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\d\a\9\6\e\f\9\-\b\6\1\7\-\1\1\e\f\-\9\b\0\5\-\d\5\e\3\4\e\0\8\f\e\3\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\1\d\c\4\e\6\7\5\-\b\6\1\7\-\1\1\e\f\-\9\b\0\5\-\d\5\e\3\4\e\0\8\f\e\3\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]]
00:04:55.259   10:19:47 json_config -- json_config/json_config.sh@93 -- # cat
00:04:55.259    10:19:47 json_config -- json_config/json_config.sh@93 -- # printf ' %s\n' bdev_register:1d72803f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1d8df77f-b617-11ef-9b05-d5e34e08fe3b bdev_register:1da96ef9-b617-11ef-9b05-d5e34e08fe3b bdev_register:1dc4e675-b617-11ef-9b05-d5e34e08fe3b bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk
00:04:55.259  Expected events matched:
00:04:55.259   bdev_register:1d72803f-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259   bdev_register:1d8df77f-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259   bdev_register:1da96ef9-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259   bdev_register:1dc4e675-b617-11ef-9b05-d5e34e08fe3b
00:04:55.259   bdev_register:Malloc0
00:04:55.259   bdev_register:Malloc0p0
00:04:55.259   bdev_register:Malloc0p1
00:04:55.259   bdev_register:Malloc0p2
00:04:55.259   bdev_register:Malloc1
00:04:55.259   bdev_register:Malloc3
00:04:55.259   bdev_register:Null0
00:04:55.259   bdev_register:Nvme0n1
00:04:55.259   bdev_register:Nvme0n1p0
00:04:55.259   bdev_register:Nvme0n1p1
00:04:55.259   bdev_register:PTBdevFromMalloc3
00:04:55.259   bdev_register:aio_disk
00:04:55.259   10:19:47 json_config -- json_config/json_config.sh@187 -- # timing_exit create_bdev_subsystem_config
00:04:55.259   10:19:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:55.259   10:19:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:55.259   10:19:47 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:04:55.259   10:19:47 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:04:55.259   10:19:47 json_config -- json_config/json_config.sh@297 -- # [[ 0 -eq 1 ]]
00:04:55.259   10:19:47 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:04:55.259   10:19:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:55.259   10:19:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:55.517   10:19:47 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:04:55.517   10:19:47 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:55.517   10:19:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:04:55.517  MallocBdevForConfigChangeCheck
00:04:55.517   10:19:47 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:04:55.517   10:19:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:55.517   10:19:47 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:55.517   10:19:47 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:04:55.517   10:19:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:55.774  INFO: shutting down applications...
00:04:55.774   10:19:47 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:04:55.774   10:19:47 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:04:55.774   10:19:47 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:04:55.774   10:19:47 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:04:55.774   10:19:47 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:04:56.032  [2024-12-09 10:19:48.032135] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test
00:04:56.032  Calling clear_iscsi_subsystem
00:04:56.032  Calling clear_nvmf_subsystem
00:04:56.032  Calling clear_bdev_subsystem
00:04:56.291   10:19:48 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
00:04:56.291   10:19:48 json_config -- json_config/json_config.sh@350 -- # count=100
00:04:56.291   10:19:48 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:04:56.291   10:19:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty
00:04:56.291   10:19:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:56.291   10:19:48 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:04:56.549   10:19:48 json_config -- json_config/json_config.sh@352 -- # break
00:04:56.549   10:19:48 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:04:56.549   10:19:48 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:04:56.549   10:19:48 json_config -- json_config/common.sh@31 -- # local app=target
00:04:56.549   10:19:48 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:04:56.549   10:19:48 json_config -- json_config/common.sh@35 -- # [[ -n 47935 ]]
00:04:56.549   10:19:48 json_config -- json_config/common.sh@38 -- # kill -SIGINT 47935
00:04:56.549   10:19:48 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:04:56.549   10:19:48 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:56.549   10:19:48 json_config -- json_config/common.sh@41 -- # kill -0 47935
00:04:56.549   10:19:48 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:04:57.118   10:19:49 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:04:57.118   10:19:49 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:04:57.118   10:19:49 json_config -- json_config/common.sh@41 -- # kill -0 47935
00:04:57.118   10:19:49 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:04:57.118   10:19:49 json_config -- json_config/common.sh@43 -- # break
00:04:57.118   10:19:49 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:04:57.118  SPDK target shutdown done
00:04:57.118   10:19:49 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:04:57.118  INFO: relaunching applications...
00:04:57.118   10:19:49 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:04:57.118   10:19:49 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:57.118   10:19:49 json_config -- json_config/common.sh@9 -- # local app=target
00:04:57.118   10:19:49 json_config -- json_config/common.sh@10 -- # shift
00:04:57.118   10:19:49 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:57.118   10:19:49 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:57.118   10:19:49 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:04:57.118   10:19:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:57.118   10:19:49 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:57.118  Waiting for target to run...
00:04:57.118   10:19:49 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=48122
00:04:57.118   10:19:49 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:57.118   10:19:49 json_config -- json_config/common.sh@25 -- # waitforlisten 48122 /var/tmp/spdk_tgt.sock
00:04:57.118   10:19:49 json_config -- common/autotest_common.sh@835 -- # '[' -z 48122 ']'
00:04:57.118   10:19:49 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:57.118   10:19:49 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:57.118  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:57.118   10:19:49 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:57.118   10:19:49 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:57.118   10:19:49 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:57.118   10:19:49 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:57.118  [2024-12-09 10:19:49.059086] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:04:57.118  [2024-12-09 10:19:49.059386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:04:57.118  EAL: TSC is not safe to use in SMP mode
00:04:57.118  EAL: TSC is not invariant
00:04:57.118  [2024-12-09 10:19:49.217116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:57.118  [2024-12-09 10:19:49.243155] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:04:57.118  [2024-12-09 10:19:49.243239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:57.376  [2024-12-09 10:19:49.378739] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:04:57.376  [2024-12-09 10:19:49.378777] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1
00:04:57.376  [2024-12-09 10:19:49.386730] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:04:57.376  [2024-12-09 10:19:49.386749] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0
00:04:57.376  [2024-12-09 10:19:49.394743] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:04:57.376  [2024-12-09 10:19:49.394765] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:04:57.376  [2024-12-09 10:19:49.394771] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:04:57.376  [2024-12-09 10:19:49.402745] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:04:57.376  [2024-12-09 10:19:49.470758] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:04:57.376  [2024-12-09 10:19:49.470792] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:57.376  [2024-12-09 10:19:49.470799] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaa08dc3a500
00:04:57.376  [2024-12-09 10:19:49.470804] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:57.376  [2024-12-09 10:19:49.470856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:57.376  [2024-12-09 10:19:49.470861] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3
00:04:57.942   10:19:49 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:57.942   10:19:49 json_config -- common/autotest_common.sh@868 -- # return 0
00:04:57.942  
00:04:57.942   10:19:49 json_config -- json_config/common.sh@26 -- # echo ''
00:04:57.942   10:19:49 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:04:57.942  INFO: Checking if target configuration is the same...
00:04:57.942   10:19:49 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:04:57.942   10:19:49 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.hnEddZ /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:57.942  + '[' 2 -ne 2 ']'
00:04:57.943  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:04:57.943  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:04:57.943  + rootdir=/home/vagrant/spdk_repo/spdk
00:04:57.943  +++ basename /tmp//sh-np.hnEddZ
00:04:57.943  ++ mktemp /tmp/sh-np.hnEddZ.XXX
00:04:57.943  + tmp_file_1=/tmp/sh-np.hnEddZ.hme
00:04:57.943  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:57.943  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:57.943  + tmp_file_2=/tmp/spdk_tgt_config.json.Grz
00:04:57.943  + ret=0
00:04:57.943  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:57.943    10:19:49 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:04:57.943    10:19:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:58.201  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:58.201  + diff -u /tmp/sh-np.hnEddZ.hme /tmp/spdk_tgt_config.json.Grz
00:04:58.201  + echo 'INFO: JSON config files are the same'
00:04:58.201  INFO: JSON config files are the same
00:04:58.201  + rm /tmp/sh-np.hnEddZ.hme /tmp/spdk_tgt_config.json.Grz
00:04:58.201  + exit 0
00:04:58.202   10:19:50 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:04:58.202  INFO: changing configuration and checking if this can be detected...
00:04:58.202   10:19:50 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:04:58.202   10:19:50 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:58.202   10:19:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:04:58.460   10:19:50 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.tWiNl7 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:58.460  + '[' 2 -ne 2 ']'
00:04:58.460  +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh
00:04:58.460  ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../..
00:04:58.460  + rootdir=/home/vagrant/spdk_repo/spdk
00:04:58.460  +++ basename /tmp//sh-np.tWiNl7
00:04:58.460  ++ mktemp /tmp/sh-np.tWiNl7.XXX
00:04:58.460  + tmp_file_1=/tmp/sh-np.tWiNl7.iqT
00:04:58.460  +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:58.460  ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:04:58.460  + tmp_file_2=/tmp/spdk_tgt_config.json.W9B
00:04:58.460  + ret=0
00:04:58.460  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:58.460    10:19:50 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:04:58.460    10:19:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:04:58.718  + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort
00:04:58.718  + diff -u /tmp/sh-np.tWiNl7.iqT /tmp/spdk_tgt_config.json.W9B
00:04:58.718  + ret=1
00:04:58.718  + echo '=== Start of file: /tmp/sh-np.tWiNl7.iqT ==='
00:04:58.718  + cat /tmp/sh-np.tWiNl7.iqT
00:04:58.718  + echo '=== End of file: /tmp/sh-np.tWiNl7.iqT ==='
00:04:58.718  + echo ''
00:04:58.718  + echo '=== Start of file: /tmp/spdk_tgt_config.json.W9B ==='
00:04:58.718  + cat /tmp/spdk_tgt_config.json.W9B
00:04:58.718  + echo '=== End of file: /tmp/spdk_tgt_config.json.W9B ==='
00:04:58.718  + echo ''
00:04:58.718  + rm /tmp/sh-np.tWiNl7.iqT /tmp/spdk_tgt_config.json.W9B
00:04:58.718  + exit 1
00:04:58.718  INFO: configuration change detected.
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:04:58.718   10:19:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.718   10:19:50 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@324 -- # [[ -n 48122 ]]
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:04:58.718   10:19:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:04:58.718   10:19:50 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@193 -- # [[ 1 -eq 1 ]]
00:04:58.718   10:19:50 json_config -- json_config/json_config.sh@194 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0
00:04:58.718   10:19:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0
00:04:58.975   10:19:50 json_config -- json_config/json_config.sh@195 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0
00:04:58.975   10:19:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0
00:04:58.975   10:19:51 json_config -- json_config/json_config.sh@196 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0
00:04:58.975   10:19:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0
00:04:59.232   10:19:51 json_config -- json_config/json_config.sh@197 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test
00:04:59.232   10:19:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test
00:04:59.490    10:19:51 json_config -- json_config/json_config.sh@200 -- # uname -s
00:04:59.490   10:19:51 json_config -- json_config/json_config.sh@200 -- # [[ FreeBSD = Linux ]]
00:04:59.490   10:19:51 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:04:59.490   10:19:51 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:59.490   10:19:51 json_config -- json_config/json_config.sh@330 -- # killprocess 48122
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@954 -- # '[' -z 48122 ']'
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@958 -- # kill -0 48122
00:04:59.490    10:19:51 json_config -- common/autotest_common.sh@959 -- # uname
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:04:59.490    10:19:51 json_config -- common/autotest_common.sh@962 -- # ps -c -o command 48122
00:04:59.490    10:19:51 json_config -- common/autotest_common.sh@962 -- # tail -1
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48122'
00:04:59.490  killing process with pid 48122
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@973 -- # kill 48122
00:04:59.490   10:19:51 json_config -- common/autotest_common.sh@978 -- # wait 48122
00:04:59.490   10:19:51 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json
00:04:59.491   10:19:51 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:04:59.491   10:19:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:04:59.491   10:19:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:59.491   10:19:51 json_config -- json_config/json_config.sh@335 -- # return 0
00:04:59.491  INFO: Success
00:04:59.491   10:19:51 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:04:59.491  
00:04:59.491  real	0m8.760s
00:04:59.491  user	0m13.334s
00:04:59.491  sys	0m1.299s
00:04:59.491   10:19:51 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.491   10:19:51 json_config -- common/autotest_common.sh@10 -- # set +x
00:04:59.491  ************************************
00:04:59.491  END TEST json_config
00:04:59.491  ************************************
00:04:59.749   10:19:51  -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:59.749   10:19:51  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:59.749   10:19:51  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:59.749   10:19:51  -- common/autotest_common.sh@10 -- # set +x
00:04:59.749  ************************************
00:04:59.749  START TEST json_config_extra_key
00:04:59.749  ************************************
00:04:59.749   10:19:51 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh
00:04:59.749    10:19:51 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:59.749     10:19:51 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:59.749     10:19:51 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:04:59.749    10:19:51 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:59.749     10:19:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:59.749    10:19:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:04:59.749    10:19:51 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:59.749    10:19:51 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:59.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.749  		--rc genhtml_branch_coverage=1
00:04:59.749  		--rc genhtml_function_coverage=1
00:04:59.749  		--rc genhtml_legend=1
00:04:59.749  		--rc geninfo_all_blocks=1
00:04:59.749  		--rc geninfo_unexecuted_blocks=1
00:04:59.749  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:59.749  		'
00:04:59.749    10:19:51 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:59.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.749  		--rc genhtml_branch_coverage=1
00:04:59.749  		--rc genhtml_function_coverage=1
00:04:59.749  		--rc genhtml_legend=1
00:04:59.749  		--rc geninfo_all_blocks=1
00:04:59.749  		--rc geninfo_unexecuted_blocks=1
00:04:59.749  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:59.749  		'
00:04:59.749    10:19:51 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:59.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.749  		--rc genhtml_branch_coverage=1
00:04:59.749  		--rc genhtml_function_coverage=1
00:04:59.749  		--rc genhtml_legend=1
00:04:59.749  		--rc geninfo_all_blocks=1
00:04:59.749  		--rc geninfo_unexecuted_blocks=1
00:04:59.749  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:59.749  		'
00:04:59.749    10:19:51 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:04:59.749  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:59.749  		--rc genhtml_branch_coverage=1
00:04:59.749  		--rc genhtml_function_coverage=1
00:04:59.749  		--rc genhtml_legend=1
00:04:59.749  		--rc geninfo_all_blocks=1
00:04:59.749  		--rc geninfo_unexecuted_blocks=1
00:04:59.749  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:04:59.749  		'
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:04:59.749     10:19:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:04:59.749    10:19:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]]
00:04:59.749    10:19:51 json_config_extra_key -- nvmf/common.sh@7 -- # return 0
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:04:59.749  INFO: launching applications...
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:04:59.749   10:19:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=48255
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json
00:04:59.749   10:19:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:04:59.749  Waiting for target to run...
00:04:59.750   10:19:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 48255 /var/tmp/spdk_tgt.sock
00:04:59.750   10:19:51 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 48255 ']'
00:04:59.750   10:19:51 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:04:59.750   10:19:51 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:59.750  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:04:59.750   10:19:51 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:04:59.750   10:19:51 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:59.750   10:19:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:04:59.750  [2024-12-09 10:19:51.878800] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:04:59.750  [2024-12-09 10:19:51.878935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:00.101  EAL: TSC is not safe to use in SMP mode
00:05:00.101  EAL: TSC is not invariant
00:05:00.101  [2024-12-09 10:19:52.033740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.101  [2024-12-09 10:19:52.054895] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:00.101  [2024-12-09 10:19:52.054965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.679   10:19:52 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:00.679   10:19:52 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:05:00.679  
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:05:00.679  INFO: shutting down applications...
00:05:00.679   10:19:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:05:00.679   10:19:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 48255 ]]
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 48255
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 48255
00:05:00.679   10:19:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:05:01.247   10:19:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:05:01.247   10:19:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:01.247   10:19:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 48255
00:05:01.247   10:19:53 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:01.247   10:19:53 json_config_extra_key -- json_config/common.sh@43 -- # break
00:05:01.247   10:19:53 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:01.247  SPDK target shutdown done
00:05:01.247  Success
00:05:01.247   10:19:53 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:05:01.247   10:19:53 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:05:01.247  
00:05:01.247  real	0m1.556s
00:05:01.247  user	0m1.139s
00:05:01.247  sys	0m0.331s
00:05:01.247   10:19:53 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:01.247   10:19:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:01.247  ************************************
00:05:01.247  END TEST json_config_extra_key
00:05:01.247  ************************************
00:05:01.247   10:19:53  -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:01.247   10:19:53  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:01.247   10:19:53  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:01.247   10:19:53  -- common/autotest_common.sh@10 -- # set +x
00:05:01.247  ************************************
00:05:01.247  START TEST alias_rpc
00:05:01.247  ************************************
00:05:01.247   10:19:53 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:05:01.505  * Looking for test storage...
00:05:01.505  * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc
00:05:01.505    10:19:53 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:01.505     10:19:53 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:01.505     10:19:53 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:01.505    10:19:53 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@345 -- # : 1
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:01.505     10:19:53 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:01.505    10:19:53 alias_rpc -- scripts/common.sh@368 -- # return 0
00:05:01.505    10:19:53 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:01.505    10:19:53 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:01.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.505  		--rc genhtml_branch_coverage=1
00:05:01.505  		--rc genhtml_function_coverage=1
00:05:01.505  		--rc genhtml_legend=1
00:05:01.505  		--rc geninfo_all_blocks=1
00:05:01.505  		--rc geninfo_unexecuted_blocks=1
00:05:01.505  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:01.505  		'
00:05:01.505    10:19:53 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:01.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.505  		--rc genhtml_branch_coverage=1
00:05:01.505  		--rc genhtml_function_coverage=1
00:05:01.505  		--rc genhtml_legend=1
00:05:01.505  		--rc geninfo_all_blocks=1
00:05:01.505  		--rc geninfo_unexecuted_blocks=1
00:05:01.505  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:01.505  		'
00:05:01.505    10:19:53 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:01.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.505  		--rc genhtml_branch_coverage=1
00:05:01.505  		--rc genhtml_function_coverage=1
00:05:01.505  		--rc genhtml_legend=1
00:05:01.505  		--rc geninfo_all_blocks=1
00:05:01.505  		--rc geninfo_unexecuted_blocks=1
00:05:01.505  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:01.505  		'
00:05:01.505    10:19:53 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:01.505  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:01.505  		--rc genhtml_branch_coverage=1
00:05:01.505  		--rc genhtml_function_coverage=1
00:05:01.505  		--rc genhtml_legend=1
00:05:01.505  		--rc geninfo_all_blocks=1
00:05:01.506  		--rc geninfo_unexecuted_blocks=1
00:05:01.506  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:01.506  		'
00:05:01.506   10:19:53 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:05:01.506   10:19:53 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=48321
00:05:01.506   10:19:53 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 48321
00:05:01.506   10:19:53 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 48321 ']'
00:05:01.506   10:19:53 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:01.506   10:19:53 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:01.506   10:19:53 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:01.506   10:19:53 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:01.506  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:01.506   10:19:53 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:01.506   10:19:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:01.506  [2024-12-09 10:19:53.518094] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:01.506  [2024-12-09 10:19:53.518255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:01.763  EAL: TSC is not safe to use in SMP mode
00:05:01.763  EAL: TSC is not invariant
00:05:01.763  [2024-12-09 10:19:53.776181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:01.763  [2024-12-09 10:19:53.798097] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:01.763  [2024-12-09 10:19:53.798135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:02.328   10:19:54 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:02.328   10:19:54 alias_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:02.328   10:19:54 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i
00:05:02.586   10:19:54 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 48321
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 48321 ']'
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 48321
00:05:02.586    10:19:54 alias_rpc -- common/autotest_common.sh@959 -- # uname
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:05:02.586    10:19:54 alias_rpc -- common/autotest_common.sh@962 -- # ps -c -o command 48321
00:05:02.586    10:19:54 alias_rpc -- common/autotest_common.sh@962 -- # tail -1
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:05:02.586  killing process with pid 48321
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48321'
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@973 -- # kill 48321
00:05:02.586   10:19:54 alias_rpc -- common/autotest_common.sh@978 -- # wait 48321
00:05:02.846  
00:05:02.846  real	0m1.443s
00:05:02.846  user	0m1.647s
00:05:02.846  sys	0m0.407s
00:05:02.846   10:19:54 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:02.846   10:19:54 alias_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:02.846  ************************************
00:05:02.846  END TEST alias_rpc
00:05:02.846  ************************************
00:05:02.846   10:19:54  -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]]
00:05:02.846   10:19:54  -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:05:02.846   10:19:54  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:02.846   10:19:54  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:02.846   10:19:54  -- common/autotest_common.sh@10 -- # set +x
00:05:02.846  ************************************
00:05:02.846  START TEST spdkcli_tcp
00:05:02.846  ************************************
00:05:02.846   10:19:54 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh
00:05:02.846  * Looking for test storage...
00:05:02.846  * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:05:02.846    10:19:54 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:02.846     10:19:54 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version
00:05:02.846     10:19:54 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:02.846    10:19:54 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-:
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-:
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<'
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@345 -- # : 1
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:02.846     10:19:54 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1
00:05:02.846     10:19:54 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1
00:05:02.846     10:19:54 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:02.846     10:19:54 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1
00:05:02.846    10:19:54 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1
00:05:02.846     10:19:54 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2
00:05:02.846     10:19:55 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2
00:05:02.846     10:19:55 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:02.846     10:19:55 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2
00:05:03.107    10:19:55 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2
00:05:03.107    10:19:55 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:03.107    10:19:55 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:03.107    10:19:55 spdkcli_tcp -- scripts/common.sh@368 -- # return 0
00:05:03.107    10:19:55 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:03.107    10:19:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:03.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.107  		--rc genhtml_branch_coverage=1
00:05:03.107  		--rc genhtml_function_coverage=1
00:05:03.107  		--rc genhtml_legend=1
00:05:03.107  		--rc geninfo_all_blocks=1
00:05:03.107  		--rc geninfo_unexecuted_blocks=1
00:05:03.107  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:03.107  		'
00:05:03.107    10:19:55 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:03.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.107  		--rc genhtml_branch_coverage=1
00:05:03.107  		--rc genhtml_function_coverage=1
00:05:03.107  		--rc genhtml_legend=1
00:05:03.107  		--rc geninfo_all_blocks=1
00:05:03.107  		--rc geninfo_unexecuted_blocks=1
00:05:03.107  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:03.107  		'
00:05:03.107    10:19:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:03.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.107  		--rc genhtml_branch_coverage=1
00:05:03.107  		--rc genhtml_function_coverage=1
00:05:03.107  		--rc genhtml_legend=1
00:05:03.107  		--rc geninfo_all_blocks=1
00:05:03.107  		--rc geninfo_unexecuted_blocks=1
00:05:03.107  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:03.107  		'
00:05:03.107    10:19:55 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:03.107  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:03.107  		--rc genhtml_branch_coverage=1
00:05:03.107  		--rc genhtml_function_coverage=1
00:05:03.107  		--rc genhtml_legend=1
00:05:03.107  		--rc geninfo_all_blocks=1
00:05:03.107  		--rc geninfo_unexecuted_blocks=1
00:05:03.107  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:03.107  		'
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:05:03.107    10:19:55 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:05:03.107    10:19:55 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=48390
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 48390
00:05:03.107   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 48390 ']'
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:03.107  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:03.107   10:19:55 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:03.107  [2024-12-09 10:19:55.014875] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:03.107  [2024-12-09 10:19:55.015014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:03.365  EAL: TSC is not safe to use in SMP mode
00:05:03.365  EAL: TSC is not invariant
00:05:03.365  [2024-12-09 10:19:55.326204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:03.365  [2024-12-09 10:19:55.350669] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:03.365  [2024-12-09 10:19:55.350697] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:03.365  [2024-12-09 10:19:55.350742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:03.365  [2024-12-09 10:19:55.350739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:03.932   10:19:55 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:03.932   10:19:55 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0
00:05:03.932   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=48398
00:05:03.932   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock
00:05:03.932   10:19:55 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
00:05:03.932  [
00:05:03.932    "spdk_get_version",
00:05:03.932    "rpc_get_methods",
00:05:03.932    "env_dpdk_get_mem_stats",
00:05:03.932    "trace_get_info",
00:05:03.932    "trace_get_tpoint_group_mask",
00:05:03.932    "trace_disable_tpoint_group",
00:05:03.932    "trace_enable_tpoint_group",
00:05:03.932    "trace_clear_tpoint_mask",
00:05:03.932    "trace_set_tpoint_mask",
00:05:03.932    "notify_get_notifications",
00:05:03.932    "notify_get_types",
00:05:03.932    "accel_get_stats",
00:05:03.932    "accel_set_options",
00:05:03.932    "accel_set_driver",
00:05:03.932    "accel_crypto_key_destroy",
00:05:03.932    "accel_crypto_keys_get",
00:05:03.932    "accel_crypto_key_create",
00:05:03.932    "accel_assign_opc",
00:05:03.932    "accel_get_module_info",
00:05:03.932    "accel_get_opc_assignments",
00:05:03.932    "bdev_get_histogram",
00:05:03.932    "bdev_enable_histogram",
00:05:03.932    "bdev_set_qos_limit",
00:05:03.932    "bdev_set_qd_sampling_period",
00:05:03.932    "bdev_get_bdevs",
00:05:03.932    "bdev_reset_iostat",
00:05:03.932    "bdev_get_iostat",
00:05:03.932    "bdev_examine",
00:05:03.932    "bdev_wait_for_examine",
00:05:03.932    "bdev_set_options",
00:05:03.932    "keyring_get_keys",
00:05:03.932    "framework_get_pci_devices",
00:05:03.932    "framework_get_config",
00:05:03.932    "framework_get_subsystems",
00:05:03.932    "sock_get_default_impl",
00:05:03.932    "sock_set_default_impl",
00:05:03.932    "sock_impl_set_options",
00:05:03.932    "sock_impl_get_options",
00:05:03.932    "thread_set_cpumask",
00:05:03.932    "scheduler_set_options",
00:05:03.932    "framework_get_governor",
00:05:03.932    "framework_get_scheduler",
00:05:03.932    "framework_set_scheduler",
00:05:03.932    "framework_get_reactors",
00:05:03.932    "thread_get_io_channels",
00:05:03.932    "thread_get_pollers",
00:05:03.932    "thread_get_stats",
00:05:03.932    "framework_monitor_context_switch",
00:05:03.932    "spdk_kill_instance",
00:05:03.932    "log_enable_timestamps",
00:05:03.932    "log_get_flags",
00:05:03.932    "log_clear_flag",
00:05:03.932    "log_set_flag",
00:05:03.932    "log_get_level",
00:05:03.932    "log_set_level",
00:05:03.932    "log_get_print_level",
00:05:03.932    "log_set_print_level",
00:05:03.932    "framework_enable_cpumask_locks",
00:05:03.932    "framework_disable_cpumask_locks",
00:05:03.932    "framework_wait_init",
00:05:03.932    "framework_start_init",
00:05:03.932    "iobuf_get_stats",
00:05:03.932    "iobuf_set_options",
00:05:03.932    "vmd_rescan",
00:05:03.932    "vmd_remove_device",
00:05:03.932    "vmd_enable",
00:05:03.932    "nvmf_stop_mdns_prr",
00:05:03.932    "nvmf_publish_mdns_prr",
00:05:03.932    "nvmf_subsystem_get_listeners",
00:05:03.932    "nvmf_subsystem_get_qpairs",
00:05:03.932    "nvmf_subsystem_get_controllers",
00:05:03.932    "nvmf_get_stats",
00:05:03.932    "nvmf_get_transports",
00:05:03.932    "nvmf_create_transport",
00:05:03.932    "nvmf_get_targets",
00:05:03.932    "nvmf_delete_target",
00:05:03.932    "nvmf_create_target",
00:05:03.932    "nvmf_subsystem_allow_any_host",
00:05:03.932    "nvmf_subsystem_set_keys",
00:05:03.932    "nvmf_subsystem_remove_host",
00:05:03.932    "nvmf_subsystem_add_host",
00:05:03.932    "nvmf_ns_remove_host",
00:05:03.932    "nvmf_ns_add_host",
00:05:03.932    "nvmf_subsystem_remove_ns",
00:05:03.932    "nvmf_subsystem_set_ns_ana_group",
00:05:03.932    "nvmf_subsystem_add_ns",
00:05:03.932    "nvmf_subsystem_listener_set_ana_state",
00:05:03.932    "nvmf_discovery_get_referrals",
00:05:03.932    "nvmf_discovery_remove_referral",
00:05:03.932    "nvmf_discovery_add_referral",
00:05:03.932    "nvmf_subsystem_remove_listener",
00:05:03.932    "nvmf_subsystem_add_listener",
00:05:03.932    "nvmf_delete_subsystem",
00:05:03.932    "nvmf_create_subsystem",
00:05:03.932    "nvmf_get_subsystems",
00:05:03.932    "nvmf_set_crdt",
00:05:03.932    "nvmf_set_config",
00:05:03.932    "nvmf_set_max_subsystems",
00:05:03.932    "scsi_get_devices",
00:05:03.932    "iscsi_get_histogram",
00:05:03.932    "iscsi_enable_histogram",
00:05:03.932    "iscsi_set_options",
00:05:03.932    "iscsi_get_auth_groups",
00:05:03.932    "iscsi_auth_group_remove_secret",
00:05:03.932    "iscsi_auth_group_add_secret",
00:05:03.932    "iscsi_delete_auth_group",
00:05:03.932    "iscsi_create_auth_group",
00:05:03.932    "iscsi_set_discovery_auth",
00:05:03.932    "iscsi_get_options",
00:05:03.932    "iscsi_target_node_request_logout",
00:05:03.932    "iscsi_target_node_set_redirect",
00:05:03.932    "iscsi_target_node_set_auth",
00:05:03.932    "iscsi_target_node_add_lun",
00:05:03.932    "iscsi_get_stats",
00:05:03.932    "iscsi_get_connections",
00:05:03.932    "iscsi_portal_group_set_auth",
00:05:03.932    "iscsi_start_portal_group",
00:05:03.932    "iscsi_delete_portal_group",
00:05:03.932    "iscsi_create_portal_group",
00:05:03.932    "iscsi_get_portal_groups",
00:05:03.932    "iscsi_delete_target_node",
00:05:03.932    "iscsi_target_node_remove_pg_ig_maps",
00:05:03.932    "iscsi_target_node_add_pg_ig_maps",
00:05:03.932    "iscsi_create_target_node",
00:05:03.932    "iscsi_get_target_nodes",
00:05:03.932    "iscsi_delete_initiator_group",
00:05:03.932    "iscsi_initiator_group_remove_initiators",
00:05:03.932    "iscsi_initiator_group_add_initiators",
00:05:03.932    "iscsi_create_initiator_group",
00:05:03.932    "iscsi_get_initiator_groups",
00:05:03.932    "keyring_file_remove_key",
00:05:03.932    "keyring_file_add_key",
00:05:03.933    "iaa_scan_accel_module",
00:05:03.933    "dsa_scan_accel_module",
00:05:03.933    "ioat_scan_accel_module",
00:05:03.933    "accel_error_inject_error",
00:05:03.933    "bdev_aio_delete",
00:05:03.933    "bdev_aio_rescan",
00:05:03.933    "bdev_aio_create",
00:05:03.933    "blobfs_create",
00:05:03.933    "blobfs_detect",
00:05:03.933    "blobfs_set_cache_size",
00:05:03.933    "bdev_zone_block_delete",
00:05:03.933    "bdev_zone_block_create",
00:05:03.933    "bdev_delay_delete",
00:05:03.933    "bdev_delay_create",
00:05:03.933    "bdev_delay_update_latency",
00:05:03.933    "bdev_split_delete",
00:05:03.933    "bdev_split_create",
00:05:03.933    "bdev_error_inject_error",
00:05:03.933    "bdev_error_delete",
00:05:03.933    "bdev_error_create",
00:05:03.933    "bdev_raid_set_options",
00:05:03.933    "bdev_raid_remove_base_bdev",
00:05:03.933    "bdev_raid_add_base_bdev",
00:05:03.933    "bdev_raid_delete",
00:05:03.933    "bdev_raid_create",
00:05:03.933    "bdev_raid_get_bdevs",
00:05:03.933    "bdev_lvol_set_parent_bdev",
00:05:03.933    "bdev_lvol_set_parent",
00:05:03.933    "bdev_lvol_check_shallow_copy",
00:05:03.933    "bdev_lvol_start_shallow_copy",
00:05:03.933    "bdev_lvol_grow_lvstore",
00:05:03.933    "bdev_lvol_get_lvols",
00:05:03.933    "bdev_lvol_get_lvstores",
00:05:03.933    "bdev_lvol_delete",
00:05:03.933    "bdev_lvol_set_read_only",
00:05:03.933    "bdev_lvol_resize",
00:05:03.933    "bdev_lvol_decouple_parent",
00:05:03.933    "bdev_lvol_inflate",
00:05:03.933    "bdev_lvol_rename",
00:05:03.933    "bdev_lvol_clone_bdev",
00:05:03.933    "bdev_lvol_clone",
00:05:03.933    "bdev_lvol_snapshot",
00:05:03.933    "bdev_lvol_create",
00:05:03.933    "bdev_lvol_delete_lvstore",
00:05:03.933    "bdev_lvol_rename_lvstore",
00:05:03.933    "bdev_lvol_create_lvstore",
00:05:03.933    "bdev_passthru_delete",
00:05:03.933    "bdev_passthru_create",
00:05:03.933    "bdev_nvme_send_cmd",
00:05:03.933    "bdev_nvme_set_keys",
00:05:03.933    "bdev_nvme_get_path_iostat",
00:05:03.933    "bdev_nvme_get_mdns_discovery_info",
00:05:03.933    "bdev_nvme_stop_mdns_discovery",
00:05:03.933    "bdev_nvme_start_mdns_discovery",
00:05:03.933    "bdev_nvme_set_multipath_policy",
00:05:03.933    "bdev_nvme_set_preferred_path",
00:05:03.933    "bdev_nvme_get_io_paths",
00:05:03.933    "bdev_nvme_remove_error_injection",
00:05:03.933    "bdev_nvme_add_error_injection",
00:05:03.933    "bdev_nvme_get_discovery_info",
00:05:03.933    "bdev_nvme_stop_discovery",
00:05:03.933    "bdev_nvme_start_discovery",
00:05:03.933    "bdev_nvme_get_controller_health_info",
00:05:03.933    "bdev_nvme_disable_controller",
00:05:03.933    "bdev_nvme_enable_controller",
00:05:03.933    "bdev_nvme_reset_controller",
00:05:03.933    "bdev_nvme_get_transport_statistics",
00:05:03.933    "bdev_nvme_apply_firmware",
00:05:03.933    "bdev_nvme_detach_controller",
00:05:03.933    "bdev_nvme_get_controllers",
00:05:03.933    "bdev_nvme_attach_controller",
00:05:03.933    "bdev_nvme_set_hotplug",
00:05:03.933    "bdev_nvme_set_options",
00:05:03.933    "bdev_null_resize",
00:05:03.933    "bdev_null_delete",
00:05:03.933    "bdev_null_create",
00:05:03.933    "bdev_malloc_delete",
00:05:03.933    "bdev_malloc_create"
00:05:03.933  ]
00:05:03.933   10:19:56 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:03.933   10:19:56 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:05:03.933   10:19:56 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 48390
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 48390 ']'
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 48390
00:05:03.933    10:19:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:05:03.933    10:19:56 spdkcli_tcp -- common/autotest_common.sh@962 -- # ps -c -o command 48390
00:05:03.933    10:19:56 spdkcli_tcp -- common/autotest_common.sh@962 -- # tail -1
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:05:03.933  killing process with pid 48390
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48390'
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 48390
00:05:03.933   10:19:56 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 48390
00:05:04.194  
00:05:04.194  real	0m1.371s
00:05:04.194  user	0m2.238s
00:05:04.194  sys	0m0.523s
00:05:04.194  ************************************
00:05:04.194  END TEST spdkcli_tcp
00:05:04.194  ************************************
00:05:04.194   10:19:56 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:04.194   10:19:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:04.194   10:19:56  -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:04.194   10:19:56  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.194   10:19:56  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.194   10:19:56  -- common/autotest_common.sh@10 -- # set +x
00:05:04.194  ************************************
00:05:04.194  START TEST dpdk_mem_utility
00:05:04.194  ************************************
00:05:04.194   10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh
00:05:04.455  * Looking for test storage...
00:05:04.455  * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility
00:05:04.455    10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:04.455     10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version
00:05:04.455     10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:04.455    10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-:
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-:
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<'
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:04.455     10:19:56 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1
00:05:04.455     10:19:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1
00:05:04.455     10:19:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:04.455     10:19:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1
00:05:04.455    10:19:56 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1
00:05:04.455     10:19:56 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2
00:05:04.455     10:19:56 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2
00:05:04.456     10:19:56 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:04.456     10:19:56 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2
00:05:04.456    10:19:56 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2
00:05:04.456    10:19:56 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:04.456    10:19:56 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:04.456    10:19:56 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0
00:05:04.456    10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:04.456    10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:04.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.456  		--rc genhtml_branch_coverage=1
00:05:04.456  		--rc genhtml_function_coverage=1
00:05:04.456  		--rc genhtml_legend=1
00:05:04.456  		--rc geninfo_all_blocks=1
00:05:04.456  		--rc geninfo_unexecuted_blocks=1
00:05:04.456  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:04.456  		'
00:05:04.456    10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:04.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.456  		--rc genhtml_branch_coverage=1
00:05:04.456  		--rc genhtml_function_coverage=1
00:05:04.456  		--rc genhtml_legend=1
00:05:04.456  		--rc geninfo_all_blocks=1
00:05:04.456  		--rc geninfo_unexecuted_blocks=1
00:05:04.456  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:04.456  		'
00:05:04.456    10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:04.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.456  		--rc genhtml_branch_coverage=1
00:05:04.456  		--rc genhtml_function_coverage=1
00:05:04.456  		--rc genhtml_legend=1
00:05:04.456  		--rc geninfo_all_blocks=1
00:05:04.456  		--rc geninfo_unexecuted_blocks=1
00:05:04.456  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:04.456  		'
00:05:04.456    10:19:56 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:04.456  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:04.456  		--rc genhtml_branch_coverage=1
00:05:04.456  		--rc genhtml_function_coverage=1
00:05:04.456  		--rc genhtml_legend=1
00:05:04.456  		--rc geninfo_all_blocks=1
00:05:04.456  		--rc geninfo_unexecuted_blocks=1
00:05:04.456  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:04.456  		'
00:05:04.456   10:19:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:04.456   10:19:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=48477
00:05:04.456   10:19:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 48477
00:05:04.456   10:19:56 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 48477 ']'
00:05:04.456   10:19:56 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:05:04.456   10:19:56 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:04.456   10:19:56 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:04.456  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:04.456   10:19:56 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:04.456   10:19:56 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:04.456   10:19:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:04.456  [2024-12-09 10:19:56.431900] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:04.456  [2024-12-09 10:19:56.432055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:04.715  EAL: TSC is not safe to use in SMP mode
00:05:04.715  EAL: TSC is not invariant
00:05:04.715  [2024-12-09 10:19:56.731251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:04.715  [2024-12-09 10:19:56.757388] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:04.715  [2024-12-09 10:19:56.757423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:05.283   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:05.283   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0
00:05:05.283   10:19:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT
00:05:05.283   10:19:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats
00:05:05.283   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:05.283   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:05.283  {
00:05:05.283  "filename": "/tmp/spdk_mem_dump.txt"
00:05:05.283  }
00:05:05.283   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:05.283   10:19:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
00:05:05.283  DPDK memory size 2048.000000 MiB in 1 heap(s)
00:05:05.283  1 heaps totaling size 2048.000000 MiB
00:05:05.283    size: 2048.000000 MiB heap id: 0
00:05:05.283  end heaps----------
00:05:05.283  8 mempools totaling size 553.688660 MiB
00:05:05.283    size:  212.271240 MiB name: PDU_immediate_data_Pool
00:05:05.283    size:  153.489014 MiB name: PDU_data_out_Pool
00:05:05.283    size:   92.500549 MiB name: bdev_io_48477
00:05:05.283    size:   50.000549 MiB name: msgpool_48477
00:05:05.283    size:   21.758911 MiB name: PDU_Pool
00:05:05.283    size:   19.508911 MiB name: SCSI_TASK_Pool
00:05:05.283    size:    4.133362 MiB name: evtpool_48477
00:05:05.283    size:    0.026123 MiB name: Session_Pool
00:05:05.283  end mempools-------
00:05:05.283  6 memzones totaling size 4.142822 MiB
00:05:05.283    size:    1.000366 MiB name: RG_ring_0_48477
00:05:05.283    size:    1.000366 MiB name: RG_ring_1_48477
00:05:05.283    size:    1.000366 MiB name: RG_ring_4_48477
00:05:05.283    size:    1.000366 MiB name: RG_ring_5_48477
00:05:05.283    size:    0.125366 MiB name: RG_ring_2_48477
00:05:05.283    size:    0.015991 MiB name: RG_ring_3_48477
00:05:05.283  end memzones-------
00:05:05.283   10:19:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0
00:05:05.283  heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 2
00:05:05.283    list of free elements. size: 1292.946960 MiB
00:05:05.283      element at address: 0x1060000000 with size: 1292.945618 MiB
00:05:05.283      element at address: 0x10dc796000 with size:    0.001343 MiB
00:05:05.283    list of standard malloc elements. size: 197.217896 MiB
00:05:05.283      element at address: 0x10d0390f80 with size:  132.000122 MiB
00:05:05.283      element at address: 0x10d8795f80 with size:   64.000122 MiB
00:05:05.283      element at address: 0x10ca5fff80 with size:    1.000122 MiB
00:05:05.283      element at address: 0x10dffd9f00 with size:    0.140747 MiB
00:05:05.283      element at address: 0x10ca700c80 with size:    0.062622 MiB
00:05:05.283      element at address: 0x10dfffdf80 with size:    0.007935 MiB
00:05:05.283      element at address: 0x10ca700b40 with size:    0.000305 MiB
00:05:05.283      element at address: 0x10ca700000 with size:    0.000244 MiB
00:05:05.283      element at address: 0x10ca700240 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700300 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca7003c0 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700480 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700540 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700600 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca7006c0 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700780 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700840 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700900 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca7009c0 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10ca700a80 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10d8791a00 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10d8791ac0 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10d8791cc0 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc796580 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc796640 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc796700 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc7967c0 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc796880 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc796940 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc7b6c00 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc8b6ec0 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc8b6f80 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc9b7240 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dc9b7300 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dfbb7640 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dfbb7840 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dfbb7900 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dfed7c40 with size:    0.000183 MiB
00:05:05.283      element at address: 0x10dffd9e40 with size:    0.000183 MiB
00:05:05.283    list of memzone associated elements. size: 557.835144 MiB
00:05:05.283      element at address: 0x10bbaf7f00 with size:  211.013000 MiB
00:05:05.283        associated memzone info: size:  211.012878 MiB name: MP_PDU_immediate_data_Pool_0
00:05:05.283      element at address: 0x10b217aa80 with size:  152.449524 MiB
00:05:05.283        associated memzone info: size:  152.449402 MiB name: MP_PDU_data_out_Pool_0
00:05:05.283      element at address: 0x10ca710d00 with size:   92.000122 MiB
00:05:05.283        associated memzone info: size:   92.000000 MiB name: MP_bdev_io_48477_0
00:05:05.283      element at address: 0x10dc9b73c0 with size:   48.000122 MiB
00:05:05.283        associated memzone info: size:   48.000000 MiB name: MP_msgpool_48477_0
00:05:05.283      element at address: 0x10c8f3d780 with size:   20.250671 MiB
00:05:05.283        associated memzone info: size:   20.250549 MiB name: MP_PDU_Pool_0
00:05:05.283      element at address: 0x10b0df2340 with size:   18.000671 MiB
00:05:05.283        associated memzone info: size:   18.000549 MiB name: MP_SCSI_TASK_Pool_0
00:05:05.283      element at address: 0x10dfbb79c0 with size:    3.000122 MiB
00:05:05.283        associated memzone info: size:    3.000000 MiB name: MP_evtpool_48477_0
00:05:05.283      element at address: 0x10df9b7440 with size:    2.000488 MiB
00:05:05.283        associated memzone info: size:    2.000366 MiB name: RG_MP_msgpool_48477
00:05:05.283      element at address: 0x10dfed7d00 with size:    1.008118 MiB
00:05:05.283        associated memzone info: size:    1.007996 MiB name: MP_evtpool_48477
00:05:05.283      element at address: 0x10ca3fdc40 with size:    1.008118 MiB
00:05:05.283        associated memzone info: size:    1.007996 MiB name: MP_PDU_Pool
00:05:05.283      element at address: 0x10c8e3b640 with size:    1.008118 MiB
00:05:05.283        associated memzone info: size:    1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:05.283      element at address: 0x10bb9f5dc0 with size:    1.008118 MiB
00:05:05.283        associated memzone info: size:    1.007996 MiB name: MP_PDU_data_out_Pool
00:05:05.283      element at address: 0x10b2072800 with size:    1.008118 MiB
00:05:05.283        associated memzone info: size:    1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:05.283      element at address: 0x10dc8b7040 with size:    1.000488 MiB
00:05:05.283        associated memzone info: size:    1.000366 MiB name: RG_ring_0_48477
00:05:05.283      element at address: 0x10dc7b6cc0 with size:    1.000488 MiB
00:05:05.283        associated memzone info: size:    1.000366 MiB name: RG_ring_1_48477
00:05:05.283      element at address: 0x10ca4ffd80 with size:    1.000488 MiB
00:05:05.283        associated memzone info: size:    1.000366 MiB name: RG_ring_4_48477
00:05:05.283      element at address: 0x10b0cf2140 with size:    1.000488 MiB
00:05:05.283        associated memzone info: size:    1.000366 MiB name: RG_ring_5_48477
00:05:05.283      element at address: 0x10d0310d80 with size:    0.500488 MiB
00:05:05.283        associated memzone info: size:    0.500366 MiB name: RG_MP_bdev_io_48477
00:05:05.283      element at address: 0x10ca37da40 with size:    0.500488 MiB
00:05:05.283        associated memzone info: size:    0.500366 MiB name: RG_MP_PDU_Pool
00:05:05.283      element at address: 0x10b1ff2600 with size:    0.500488 MiB
00:05:05.283        associated memzone info: size:    0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:05.283      element at address: 0x10c8dfb440 with size:    0.250488 MiB
00:05:05.283        associated memzone info: size:    0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:05.283      element at address: 0x10dfeb7a40 with size:    0.125488 MiB
00:05:05.283        associated memzone info: size:    0.125366 MiB name: RG_MP_evtpool_48477
00:05:05.283      element at address: 0x10dc796a00 with size:    0.125488 MiB
00:05:05.283        associated memzone info: size:    0.125366 MiB name: RG_ring_2_48477
00:05:05.283      element at address: 0x10bb9edbc0 with size:    0.031738 MiB
00:05:05.283        associated memzone info: size:    0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:05.283      element at address: 0x10b2174940 with size:    0.023743 MiB
00:05:05.283        associated memzone info: size:    0.023621 MiB name: MP_Session_Pool_0
00:05:05.283      element at address: 0x10d8791d80 with size:    0.016113 MiB
00:05:05.283        associated memzone info: size:    0.015991 MiB name: RG_ring_3_48477
00:05:05.283      element at address: 0x10d8791000 with size:    0.002441 MiB
00:05:05.283        associated memzone info: size:    0.002319 MiB name: RG_MP_Session_Pool
00:05:05.283      element at address: 0x10dfbb7700 with size:    0.000305 MiB
00:05:05.283        associated memzone info: size:    0.000183 MiB name: MP_msgpool_48477
00:05:05.283      element at address: 0x10d8791b80 with size:    0.000305 MiB
00:05:05.284        associated memzone info: size:    0.000183 MiB name: MP_bdev_io_48477
00:05:05.284      element at address: 0x10ca700100 with size:    0.000305 MiB
00:05:05.284        associated memzone info: size:    0.000183 MiB name: MP_Session_Pool
00:05:05.284   10:19:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:05.284   10:19:57 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 48477
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 48477 ']'
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 48477
00:05:05.284    10:19:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:05:05.284    10:19:57 dpdk_mem_utility -- common/autotest_common.sh@962 -- # tail -1
00:05:05.284    10:19:57 dpdk_mem_utility -- common/autotest_common.sh@962 -- # ps -c -o command 48477
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:05:05.284  killing process with pid 48477
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48477'
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 48477
00:05:05.284   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 48477
00:05:05.543  
00:05:05.543  real	0m1.255s
00:05:05.543  user	0m1.204s
00:05:05.543  sys	0m0.497s
00:05:05.543   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:05.543  ************************************
00:05:05.543  END TEST dpdk_mem_utility
00:05:05.543  ************************************
00:05:05.543   10:19:57 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:05.543   10:19:57  -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:05.543   10:19:57  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:05.543   10:19:57  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.543   10:19:57  -- common/autotest_common.sh@10 -- # set +x
00:05:05.543  ************************************
00:05:05.543  START TEST event
00:05:05.543  ************************************
00:05:05.543   10:19:57 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:05.543  * Looking for test storage...
00:05:05.543  * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:05.543    10:19:57 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:05.543     10:19:57 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:05.543     10:19:57 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:05.543    10:19:57 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:05.543    10:19:57 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:05.543    10:19:57 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:05.543    10:19:57 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:05.543    10:19:57 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:05.543    10:19:57 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:05.543    10:19:57 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:05.543    10:19:57 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:05.543    10:19:57 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:05.543    10:19:57 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:05.543    10:19:57 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:05.543    10:19:57 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:05.543    10:19:57 event -- scripts/common.sh@344 -- # case "$op" in
00:05:05.543    10:19:57 event -- scripts/common.sh@345 -- # : 1
00:05:05.543    10:19:57 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:05.543    10:19:57 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:05.543     10:19:57 event -- scripts/common.sh@365 -- # decimal 1
00:05:05.543     10:19:57 event -- scripts/common.sh@353 -- # local d=1
00:05:05.543     10:19:57 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:05.543     10:19:57 event -- scripts/common.sh@355 -- # echo 1
00:05:05.543    10:19:57 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:05.543     10:19:57 event -- scripts/common.sh@366 -- # decimal 2
00:05:05.543     10:19:57 event -- scripts/common.sh@353 -- # local d=2
00:05:05.543     10:19:57 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:05.543     10:19:57 event -- scripts/common.sh@355 -- # echo 2
00:05:05.543    10:19:57 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:05.543    10:19:57 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:05.543    10:19:57 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:05.543    10:19:57 event -- scripts/common.sh@368 -- # return 0
00:05:05.543    10:19:57 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:05.543    10:19:57 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:05.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.543  		--rc genhtml_branch_coverage=1
00:05:05.543  		--rc genhtml_function_coverage=1
00:05:05.543  		--rc genhtml_legend=1
00:05:05.543  		--rc geninfo_all_blocks=1
00:05:05.543  		--rc geninfo_unexecuted_blocks=1
00:05:05.543  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:05.543  		'
00:05:05.543    10:19:57 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:05.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.543  		--rc genhtml_branch_coverage=1
00:05:05.543  		--rc genhtml_function_coverage=1
00:05:05.543  		--rc genhtml_legend=1
00:05:05.543  		--rc geninfo_all_blocks=1
00:05:05.543  		--rc geninfo_unexecuted_blocks=1
00:05:05.543  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:05.543  		'
00:05:05.543    10:19:57 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:05.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.543  		--rc genhtml_branch_coverage=1
00:05:05.543  		--rc genhtml_function_coverage=1
00:05:05.543  		--rc genhtml_legend=1
00:05:05.543  		--rc geninfo_all_blocks=1
00:05:05.543  		--rc geninfo_unexecuted_blocks=1
00:05:05.543  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:05.543  		'
00:05:05.543    10:19:57 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:05.543  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:05.543  		--rc genhtml_branch_coverage=1
00:05:05.543  		--rc genhtml_function_coverage=1
00:05:05.543  		--rc genhtml_legend=1
00:05:05.543  		--rc geninfo_all_blocks=1
00:05:05.543  		--rc geninfo_unexecuted_blocks=1
00:05:05.543  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:05.543  		'
00:05:05.543   10:19:57 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:05.543    10:19:57 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:05.807   10:19:57 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:05.807   10:19:57 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:05.807   10:19:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:05.807   10:19:57 event -- common/autotest_common.sh@10 -- # set +x
00:05:05.807  ************************************
00:05:05.807  START TEST event_perf
00:05:05.807  ************************************
00:05:05.807   10:19:57 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:05.807  Running I/O for 1 seconds...[2024-12-09 10:19:57.716337] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:05.807  [2024-12-09 10:19:57.716536] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:06.069  EAL: TSC is not safe to use in SMP mode
00:05:06.069  EAL: TSC is not invariant
00:05:06.069  [2024-12-09 10:19:58.029189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:06.069  [2024-12-09 10:19:58.055095] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:06.069  [2024-12-09 10:19:58.055127] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:06.069  [2024-12-09 10:19:58.055132] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2].
00:05:06.069  [2024-12-09 10:19:58.055137] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3].
00:05:06.069  [2024-12-09 10:19:58.055327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:06.069  [2024-12-09 10:19:58.055601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:06.069  Running I/O for 1 seconds...[2024-12-09 10:19:58.055837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:06.069  [2024-12-09 10:19:58.055835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:05:07.005  
00:05:07.005  lcore  0:   350790
00:05:07.005  lcore  1:   350791
00:05:07.005  lcore  2:   350794
00:05:07.005  lcore  3:   350792
00:05:07.005  done.
00:05:07.005  
00:05:07.005  real	0m1.386s
00:05:07.005  user	0m4.048s
00:05:07.005  sys	0m0.331s
00:05:07.005   10:19:59 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:07.005  ************************************
00:05:07.005  END TEST event_perf
00:05:07.005   10:19:59 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:07.005  ************************************
00:05:07.005   10:19:59 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:07.006   10:19:59 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:07.006   10:19:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:07.006   10:19:59 event -- common/autotest_common.sh@10 -- # set +x
00:05:07.006  ************************************
00:05:07.006  START TEST event_reactor
00:05:07.006  ************************************
00:05:07.006   10:19:59 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1
00:05:07.266  [2024-12-09 10:19:59.166961] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:07.267  [2024-12-09 10:19:59.167227] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:07.524  EAL: TSC is not safe to use in SMP mode
00:05:07.524  EAL: TSC is not invariant
00:05:07.524  [2024-12-09 10:19:59.470162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:07.524  [2024-12-09 10:19:59.495018] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:07.524  [2024-12-09 10:19:59.495054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:08.458  test_start
00:05:08.458  oneshot
00:05:08.458  tick 100
00:05:08.458  tick 100
00:05:08.458  tick 250
00:05:08.458  tick 100
00:05:08.458  tick 100
00:05:08.458  tick 100
00:05:08.458  tick 250
00:05:08.458  tick 500
00:05:08.458  tick 100
00:05:08.458  tick 100
00:05:08.458  tick 250
00:05:08.458  tick 100
00:05:08.458  tick 100
00:05:08.458  test_end
00:05:08.458  
00:05:08.458  real	0m1.375s
00:05:08.458  user	0m1.049s
00:05:08.458  sys	0m0.316s
00:05:08.458   10:20:00 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.458   10:20:00 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:08.458  ************************************
00:05:08.458  END TEST event_reactor
00:05:08.458  ************************************
00:05:08.458   10:20:00 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:08.458   10:20:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:08.458   10:20:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.458   10:20:00 event -- common/autotest_common.sh@10 -- # set +x
00:05:08.458  ************************************
00:05:08.458  START TEST event_reactor_perf
00:05:08.459  ************************************
00:05:08.459   10:20:00 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:08.718  [2024-12-09 10:20:00.617535] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:08.718  [2024-12-09 10:20:00.617747] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:08.990  EAL: TSC is not safe to use in SMP mode
00:05:08.990  EAL: TSC is not invariant
00:05:08.990  [2024-12-09 10:20:00.918564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:08.990  [2024-12-09 10:20:00.949113] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:08.990  [2024-12-09 10:20:00.949162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.924  test_start
00:05:09.924  test_end
00:05:09.924  Performance:  3502958 events per second
00:05:09.924  
00:05:09.924  real	0m1.385s
00:05:09.924  user	0m1.052s
00:05:09.924  sys	0m0.330s
00:05:09.924   10:20:01 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.924   10:20:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:09.924  ************************************
00:05:09.924  END TEST event_reactor_perf
00:05:09.924  ************************************
00:05:09.924    10:20:02 event -- event/event.sh@49 -- # uname -s
00:05:09.924   10:20:02 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']'
00:05:09.924  
00:05:09.924  real	0m4.509s
00:05:09.924  user	0m6.314s
00:05:09.924  sys	0m1.145s
00:05:09.924   10:20:02 event -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:09.924   10:20:02 event -- common/autotest_common.sh@10 -- # set +x
00:05:09.924  ************************************
00:05:09.924  END TEST event
00:05:09.924  ************************************
00:05:09.924   10:20:02  -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:09.924   10:20:02  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.924   10:20:02  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.924   10:20:02  -- common/autotest_common.sh@10 -- # set +x
00:05:09.924  ************************************
00:05:09.924  START TEST thread
00:05:09.924  ************************************
00:05:09.925   10:20:02 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh
00:05:10.184  * Looking for test storage...
00:05:10.184  * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread
00:05:10.184    10:20:02 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:10.184     10:20:02 thread -- common/autotest_common.sh@1711 -- # lcov --version
00:05:10.184     10:20:02 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:10.184    10:20:02 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:10.184    10:20:02 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:10.184    10:20:02 thread -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:10.184    10:20:02 thread -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:10.184    10:20:02 thread -- scripts/common.sh@336 -- # IFS=.-:
00:05:10.184    10:20:02 thread -- scripts/common.sh@336 -- # read -ra ver1
00:05:10.184    10:20:02 thread -- scripts/common.sh@337 -- # IFS=.-:
00:05:10.184    10:20:02 thread -- scripts/common.sh@337 -- # read -ra ver2
00:05:10.184    10:20:02 thread -- scripts/common.sh@338 -- # local 'op=<'
00:05:10.184    10:20:02 thread -- scripts/common.sh@340 -- # ver1_l=2
00:05:10.184    10:20:02 thread -- scripts/common.sh@341 -- # ver2_l=1
00:05:10.184    10:20:02 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:10.184    10:20:02 thread -- scripts/common.sh@344 -- # case "$op" in
00:05:10.184    10:20:02 thread -- scripts/common.sh@345 -- # : 1
00:05:10.184    10:20:02 thread -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:10.184    10:20:02 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:10.184     10:20:02 thread -- scripts/common.sh@365 -- # decimal 1
00:05:10.184     10:20:02 thread -- scripts/common.sh@353 -- # local d=1
00:05:10.184     10:20:02 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:10.184     10:20:02 thread -- scripts/common.sh@355 -- # echo 1
00:05:10.184    10:20:02 thread -- scripts/common.sh@365 -- # ver1[v]=1
00:05:10.184     10:20:02 thread -- scripts/common.sh@366 -- # decimal 2
00:05:10.184     10:20:02 thread -- scripts/common.sh@353 -- # local d=2
00:05:10.184     10:20:02 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:10.184     10:20:02 thread -- scripts/common.sh@355 -- # echo 2
00:05:10.184    10:20:02 thread -- scripts/common.sh@366 -- # ver2[v]=2
00:05:10.184    10:20:02 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:10.184    10:20:02 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:10.184    10:20:02 thread -- scripts/common.sh@368 -- # return 0
00:05:10.184    10:20:02 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:10.184    10:20:02 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:10.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.184  		--rc genhtml_branch_coverage=1
00:05:10.184  		--rc genhtml_function_coverage=1
00:05:10.184  		--rc genhtml_legend=1
00:05:10.184  		--rc geninfo_all_blocks=1
00:05:10.184  		--rc geninfo_unexecuted_blocks=1
00:05:10.184  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:10.184  		'
00:05:10.184    10:20:02 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:10.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.184  		--rc genhtml_branch_coverage=1
00:05:10.184  		--rc genhtml_function_coverage=1
00:05:10.184  		--rc genhtml_legend=1
00:05:10.184  		--rc geninfo_all_blocks=1
00:05:10.184  		--rc geninfo_unexecuted_blocks=1
00:05:10.184  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:10.184  		'
00:05:10.184    10:20:02 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:10.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.184  		--rc genhtml_branch_coverage=1
00:05:10.184  		--rc genhtml_function_coverage=1
00:05:10.184  		--rc genhtml_legend=1
00:05:10.184  		--rc geninfo_all_blocks=1
00:05:10.184  		--rc geninfo_unexecuted_blocks=1
00:05:10.184  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:10.184  		'
00:05:10.184    10:20:02 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:10.184  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:10.184  		--rc genhtml_branch_coverage=1
00:05:10.184  		--rc genhtml_function_coverage=1
00:05:10.184  		--rc genhtml_legend=1
00:05:10.184  		--rc geninfo_all_blocks=1
00:05:10.184  		--rc geninfo_unexecuted_blocks=1
00:05:10.184  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:10.184  		'
00:05:10.184   10:20:02 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:10.184   10:20:02 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:10.184   10:20:02 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:10.184   10:20:02 thread -- common/autotest_common.sh@10 -- # set +x
00:05:10.184  ************************************
00:05:10.184  START TEST thread_poller_perf
00:05:10.184  ************************************
00:05:10.184   10:20:02 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:05:10.184  [2024-12-09 10:20:02.273893] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:10.185  [2024-12-09 10:20:02.274047] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:10.445  EAL: TSC is not safe to use in SMP mode
00:05:10.445  EAL: TSC is not invariant
00:05:10.445  [2024-12-09 10:20:02.599230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.704  Running 1000 pollers for 1 seconds with 1 microseconds period.
00:05:10.704  [2024-12-09 10:20:02.627930] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:10.704  [2024-12-09 10:20:02.627983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.646  [2024-12-09T10:20:03.807Z]  ======================================
00:05:11.646  [2024-12-09T10:20:03.807Z]  busy:2601329810 (cyc)
00:05:11.646  [2024-12-09T10:20:03.807Z]  total_run_count: 6064000
00:05:11.646  [2024-12-09T10:20:03.807Z]  tsc_hz: 2599999259 (cyc)
00:05:11.646  [2024-12-09T10:20:03.807Z]  ======================================
00:05:11.646  [2024-12-09T10:20:03.807Z]  poller_cost: 428 (cyc), 164 (nsec)
00:05:11.646  
00:05:11.646  real	0m1.411s
00:05:11.646  user	0m1.062s
00:05:11.646  sys	0m0.343s
00:05:11.646   10:20:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:11.646  ************************************
00:05:11.646  END TEST thread_poller_perf
00:05:11.646  ************************************
00:05:11.646   10:20:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:11.646   10:20:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:11.646   10:20:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']'
00:05:11.646   10:20:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:11.646   10:20:03 thread -- common/autotest_common.sh@10 -- # set +x
00:05:11.646  ************************************
00:05:11.646  START TEST thread_poller_perf
00:05:11.646  ************************************
00:05:11.646   10:20:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:05:11.646  [2024-12-09 10:20:03.767070] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:11.646  [2024-12-09 10:20:03.767306] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:12.216  EAL: TSC is not safe to use in SMP mode
00:05:12.216  EAL: TSC is not invariant
00:05:12.216  [2024-12-09 10:20:04.082668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.216  Running 1000 pollers for 1 seconds with 0 microseconds period.
00:05:12.216  [2024-12-09 10:20:04.114116] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:12.216  [2024-12-09 10:20:04.114173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.159  [2024-12-09T10:20:05.320Z]  ======================================
00:05:13.159  [2024-12-09T10:20:05.320Z]  busy:2600851292 (cyc)
00:05:13.159  [2024-12-09T10:20:05.320Z]  total_run_count: 77527000
00:05:13.159  [2024-12-09T10:20:05.320Z]  tsc_hz: 2599999259 (cyc)
00:05:13.159  [2024-12-09T10:20:05.320Z]  ======================================
00:05:13.159  [2024-12-09T10:20:05.320Z]  poller_cost: 33 (cyc), 12 (nsec)
00:05:13.159  
00:05:13.159  real	0m1.403s
00:05:13.159  user	0m1.050s
00:05:13.159  sys	0m0.348s
00:05:13.159  ************************************
00:05:13.159  END TEST thread_poller_perf
00:05:13.159  ************************************
00:05:13.159   10:20:05 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:13.159   10:20:05 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x
00:05:13.159   10:20:05 thread -- thread/thread.sh@17 -- # [[ n != \y ]]
00:05:13.159   10:20:05 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:05:13.159   10:20:05 thread -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:13.159   10:20:05 thread -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:13.159   10:20:05 thread -- common/autotest_common.sh@10 -- # set +x
00:05:13.159  ************************************
00:05:13.159  START TEST thread_spdk_lock
00:05:13.159  ************************************
00:05:13.159   10:20:05 thread.thread_spdk_lock -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock
00:05:13.159  [2024-12-09 10:20:05.255047] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:13.159  [2024-12-09 10:20:05.255323] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:13.422  EAL: TSC is not safe to use in SMP mode
00:05:13.422  EAL: TSC is not invariant
00:05:13.682  [2024-12-09 10:20:05.595136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:13.682  [2024-12-09 10:20:05.622819] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:13.682  [2024-12-09 10:20:05.622863] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:13.682  [2024-12-09 10:20:05.623072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:13.682  [2024-12-09 10:20:05.623238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.942  [2024-12-09 10:20:06.069474] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 989:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:05:13.942  [2024-12-09 10:20:06.069530] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3140:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread)
00:05:13.942  [2024-12-09 10:20:06.069537] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3095:sspin_stacks_print: *ERROR*: spinlock 0x349d20
00:05:13.942  [2024-12-09 10:20:06.069867] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 884:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:05:13.942  [2024-12-09 10:20:06.069969] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1050:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:05:13.942  [2024-12-09 10:20:06.069977] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 884:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0)
00:05:14.200  Starting test contend
00:05:14.200    Worker    Delay  Wait us  Hold us Total us
00:05:14.200         0        3   255066   166539   421606
00:05:14.200         1        5   159273   266584   425858
00:05:14.200  PASS test contend
00:05:14.200  Starting test hold_by_poller
00:05:14.200  PASS test hold_by_poller
00:05:14.200  Starting test hold_by_message
00:05:14.200  PASS test hold_by_message
00:05:14.200  /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary:
00:05:14.200     100014 assertions passed
00:05:14.200          0 assertions failed
00:05:14.200  
00:05:14.200  real	0m0.874s
00:05:14.200  user	0m0.948s
00:05:14.201  sys	0m0.365s
00:05:14.201   10:20:06 thread.thread_spdk_lock -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.201  ************************************
00:05:14.201  END TEST thread_spdk_lock
00:05:14.201  ************************************
00:05:14.201   10:20:06 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x
00:05:14.201  
00:05:14.201  real	0m4.120s
00:05:14.201  user	0m3.273s
00:05:14.201  sys	0m1.193s
00:05:14.201   10:20:06 thread -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:14.201  ************************************
00:05:14.201  END TEST thread
00:05:14.201  ************************************
00:05:14.201   10:20:06 thread -- common/autotest_common.sh@10 -- # set +x
00:05:14.201   10:20:06  -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]]
00:05:14.201   10:20:06  -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:14.201   10:20:06  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:14.201   10:20:06  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:14.201   10:20:06  -- common/autotest_common.sh@10 -- # set +x
00:05:14.201  ************************************
00:05:14.201  START TEST app_cmdline
00:05:14.201  ************************************
00:05:14.201   10:20:06 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh
00:05:14.459  * Looking for test storage...
00:05:14.459  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:14.459    10:20:06 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:14.459     10:20:06 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version
00:05:14.459     10:20:06 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:14.459    10:20:06 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@336 -- # IFS=.-:
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@337 -- # IFS=.-:
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@338 -- # local 'op=<'
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@344 -- # case "$op" in
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@345 -- # : 1
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@365 -- # decimal 1
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@353 -- # local d=1
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@355 -- # echo 1
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@366 -- # decimal 2
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@353 -- # local d=2
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:14.459     10:20:06 app_cmdline -- scripts/common.sh@355 -- # echo 2
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:14.459    10:20:06 app_cmdline -- scripts/common.sh@368 -- # return 0
00:05:14.459    10:20:06 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:14.459    10:20:06 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:14.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.459  		--rc genhtml_branch_coverage=1
00:05:14.459  		--rc genhtml_function_coverage=1
00:05:14.459  		--rc genhtml_legend=1
00:05:14.459  		--rc geninfo_all_blocks=1
00:05:14.459  		--rc geninfo_unexecuted_blocks=1
00:05:14.459  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:14.459  		'
00:05:14.459    10:20:06 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:14.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.459  		--rc genhtml_branch_coverage=1
00:05:14.459  		--rc genhtml_function_coverage=1
00:05:14.459  		--rc genhtml_legend=1
00:05:14.459  		--rc geninfo_all_blocks=1
00:05:14.459  		--rc geninfo_unexecuted_blocks=1
00:05:14.459  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:14.459  		'
00:05:14.459    10:20:06 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:14.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.459  		--rc genhtml_branch_coverage=1
00:05:14.459  		--rc genhtml_function_coverage=1
00:05:14.459  		--rc genhtml_legend=1
00:05:14.459  		--rc geninfo_all_blocks=1
00:05:14.459  		--rc geninfo_unexecuted_blocks=1
00:05:14.459  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:14.459  		'
00:05:14.459    10:20:06 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:14.459  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:14.459  		--rc genhtml_branch_coverage=1
00:05:14.459  		--rc genhtml_function_coverage=1
00:05:14.459  		--rc genhtml_legend=1
00:05:14.459  		--rc geninfo_all_blocks=1
00:05:14.459  		--rc geninfo_unexecuted_blocks=1
00:05:14.460  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:14.460  		'
00:05:14.460   10:20:06 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:14.460   10:20:06 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=48793
00:05:14.460   10:20:06 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 48793
00:05:14.460  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:14.460   10:20:06 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 48793 ']'
00:05:14.460   10:20:06 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:14.460   10:20:06 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:14.460   10:20:06 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:14.460   10:20:06 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:14.460   10:20:06 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:14.460   10:20:06 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:14.460  [2024-12-09 10:20:06.465404] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:14.460  [2024-12-09 10:20:06.465706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:14.718  EAL: TSC is not safe to use in SMP mode
00:05:14.718  EAL: TSC is not invariant
00:05:14.718  [2024-12-09 10:20:06.768968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.718  [2024-12-09 10:20:06.797469] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:14.718  [2024-12-09 10:20:06.797521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:15.288   10:20:07 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:15.288   10:20:07 app_cmdline -- common/autotest_common.sh@868 -- # return 0
00:05:15.288   10:20:07 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
00:05:15.549  {
00:05:15.549    "version": "SPDK v25.01-pre git sha1 51286f61a",
00:05:15.549    "fields": {
00:05:15.549      "major": 25,
00:05:15.549      "minor": 1,
00:05:15.549      "patch": 0,
00:05:15.549      "suffix": "-pre",
00:05:15.549      "commit": "51286f61a"
00:05:15.549    }
00:05:15.549  }
00:05:15.549   10:20:07 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:15.549   10:20:07 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:15.549   10:20:07 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:15.549   10:20:07 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:15.549    10:20:07 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:15.550    10:20:07 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:15.550    10:20:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:15.550    10:20:07 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:15.550    10:20:07 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:15.550    10:20:07 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:15.550   10:20:07 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:15.550   10:20:07 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:15.550   10:20:07 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@652 -- # local es=0
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:15.550    10:20:07 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:15.550    10:20:07 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]]
00:05:15.550   10:20:07 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:15.810  request:
00:05:15.810  {
00:05:15.810    "method": "env_dpdk_get_mem_stats",
00:05:15.810    "req_id": 1
00:05:15.810  }
00:05:15.810  Got JSON-RPC error response
00:05:15.810  response:
00:05:15.810  {
00:05:15.810    "code": -32601,
00:05:15.810    "message": "Method not found"
00:05:15.810  }
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@655 -- # es=1
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:15.810   10:20:07 app_cmdline -- app/cmdline.sh@1 -- # killprocess 48793
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 48793 ']'
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 48793
00:05:15.810    10:20:07 app_cmdline -- common/autotest_common.sh@959 -- # uname
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:05:15.810    10:20:07 app_cmdline -- common/autotest_common.sh@962 -- # ps -c -o command 48793
00:05:15.810    10:20:07 app_cmdline -- common/autotest_common.sh@962 -- # tail -1
00:05:15.810  killing process with pid 48793
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48793'
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@973 -- # kill 48793
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@978 -- # wait 48793
00:05:15.810  
00:05:15.810  real	0m1.643s
00:05:15.810  user	0m1.883s
00:05:15.810  sys	0m0.574s
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:15.810  ************************************
00:05:15.810  END TEST app_cmdline
00:05:15.810  ************************************
00:05:15.810   10:20:07 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:15.810   10:20:07  -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:15.810   10:20:07  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:15.810   10:20:07  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:15.810   10:20:07  -- common/autotest_common.sh@10 -- # set +x
00:05:15.810  ************************************
00:05:15.810  START TEST version
00:05:15.810  ************************************
00:05:15.810   10:20:07 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh
00:05:16.072  * Looking for test storage...
00:05:16.072  * Found test storage at /home/vagrant/spdk_repo/spdk/test/app
00:05:16.072    10:20:08 version -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:16.072     10:20:08 version -- common/autotest_common.sh@1711 -- # lcov --version
00:05:16.072     10:20:08 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:16.072    10:20:08 version -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:16.072    10:20:08 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:16.072    10:20:08 version -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:16.072    10:20:08 version -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:16.072    10:20:08 version -- scripts/common.sh@336 -- # IFS=.-:
00:05:16.072    10:20:08 version -- scripts/common.sh@336 -- # read -ra ver1
00:05:16.072    10:20:08 version -- scripts/common.sh@337 -- # IFS=.-:
00:05:16.072    10:20:08 version -- scripts/common.sh@337 -- # read -ra ver2
00:05:16.072    10:20:08 version -- scripts/common.sh@338 -- # local 'op=<'
00:05:16.072    10:20:08 version -- scripts/common.sh@340 -- # ver1_l=2
00:05:16.072    10:20:08 version -- scripts/common.sh@341 -- # ver2_l=1
00:05:16.072    10:20:08 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:16.072    10:20:08 version -- scripts/common.sh@344 -- # case "$op" in
00:05:16.072    10:20:08 version -- scripts/common.sh@345 -- # : 1
00:05:16.072    10:20:08 version -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:16.072    10:20:08 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:16.072     10:20:08 version -- scripts/common.sh@365 -- # decimal 1
00:05:16.072     10:20:08 version -- scripts/common.sh@353 -- # local d=1
00:05:16.072     10:20:08 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:16.072     10:20:08 version -- scripts/common.sh@355 -- # echo 1
00:05:16.072    10:20:08 version -- scripts/common.sh@365 -- # ver1[v]=1
00:05:16.072     10:20:08 version -- scripts/common.sh@366 -- # decimal 2
00:05:16.072     10:20:08 version -- scripts/common.sh@353 -- # local d=2
00:05:16.072     10:20:08 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:16.072     10:20:08 version -- scripts/common.sh@355 -- # echo 2
00:05:16.072    10:20:08 version -- scripts/common.sh@366 -- # ver2[v]=2
00:05:16.072    10:20:08 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:16.072    10:20:08 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:16.072    10:20:08 version -- scripts/common.sh@368 -- # return 0
00:05:16.072    10:20:08 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:16.072    10:20:08 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:16.072  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.072  		--rc genhtml_branch_coverage=1
00:05:16.072  		--rc genhtml_function_coverage=1
00:05:16.072  		--rc genhtml_legend=1
00:05:16.072  		--rc geninfo_all_blocks=1
00:05:16.072  		--rc geninfo_unexecuted_blocks=1
00:05:16.072  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.072  		'
00:05:16.072    10:20:08 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:16.072  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.072  		--rc genhtml_branch_coverage=1
00:05:16.072  		--rc genhtml_function_coverage=1
00:05:16.072  		--rc genhtml_legend=1
00:05:16.072  		--rc geninfo_all_blocks=1
00:05:16.072  		--rc geninfo_unexecuted_blocks=1
00:05:16.072  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.072  		'
00:05:16.072    10:20:08 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:16.072  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.072  		--rc genhtml_branch_coverage=1
00:05:16.072  		--rc genhtml_function_coverage=1
00:05:16.072  		--rc genhtml_legend=1
00:05:16.073  		--rc geninfo_all_blocks=1
00:05:16.073  		--rc geninfo_unexecuted_blocks=1
00:05:16.073  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.073  		'
00:05:16.073    10:20:08 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:16.073  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.073  		--rc genhtml_branch_coverage=1
00:05:16.073  		--rc genhtml_function_coverage=1
00:05:16.073  		--rc genhtml_legend=1
00:05:16.073  		--rc geninfo_all_blocks=1
00:05:16.073  		--rc geninfo_unexecuted_blocks=1
00:05:16.073  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.073  		'
00:05:16.073    10:20:08 version -- app/version.sh@17 -- # get_header_version major
00:05:16.073    10:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # cut -f2
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:16.073   10:20:08 version -- app/version.sh@17 -- # major=25
00:05:16.073    10:20:08 version -- app/version.sh@18 -- # get_header_version minor
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # cut -f2
00:05:16.073    10:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:16.073   10:20:08 version -- app/version.sh@18 -- # minor=1
00:05:16.073    10:20:08 version -- app/version.sh@19 -- # get_header_version patch
00:05:16.073    10:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # cut -f2
00:05:16.073   10:20:08 version -- app/version.sh@19 -- # patch=0
00:05:16.073    10:20:08 version -- app/version.sh@20 -- # get_header_version suffix
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # cut -f2
00:05:16.073    10:20:08 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h
00:05:16.073    10:20:08 version -- app/version.sh@14 -- # tr -d '"'
00:05:16.073   10:20:08 version -- app/version.sh@20 -- # suffix=-pre
00:05:16.073   10:20:08 version -- app/version.sh@22 -- # version=25.1
00:05:16.073   10:20:08 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:16.073   10:20:08 version -- app/version.sh@28 -- # version=25.1rc0
00:05:16.073   10:20:08 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:05:16.073    10:20:08 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:16.073   10:20:08 version -- app/version.sh@30 -- # py_version=25.1rc0
00:05:16.073   10:20:08 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]]
00:05:16.073  
00:05:16.073  real	0m0.236s
00:05:16.073  user	0m0.158s
00:05:16.073  sys	0m0.133s
00:05:16.073   10:20:08 version -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:16.073  ************************************
00:05:16.073  END TEST version
00:05:16.073  ************************************
00:05:16.073   10:20:08 version -- common/autotest_common.sh@10 -- # set +x
00:05:16.334   10:20:08  -- spdk/autotest.sh@179 -- # '[' 1 -eq 1 ']'
00:05:16.334   10:20:08  -- spdk/autotest.sh@180 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:05:16.334   10:20:08  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:16.334   10:20:08  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:16.334   10:20:08  -- common/autotest_common.sh@10 -- # set +x
00:05:16.334  ************************************
00:05:16.334  START TEST blockdev_general
00:05:16.334  ************************************
00:05:16.334   10:20:08 blockdev_general -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh
00:05:16.334  * Looking for test storage...
00:05:16.334  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:05:16.334    10:20:08 blockdev_general -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:16.334     10:20:08 blockdev_general -- common/autotest_common.sh@1711 -- # lcov --version
00:05:16.334     10:20:08 blockdev_general -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:16.334    10:20:08 blockdev_general -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@336 -- # IFS=.-:
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@336 -- # read -ra ver1
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@337 -- # IFS=.-:
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@337 -- # read -ra ver2
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@338 -- # local 'op=<'
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@340 -- # ver1_l=2
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@341 -- # ver2_l=1
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@344 -- # case "$op" in
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@345 -- # : 1
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@365 -- # decimal 1
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@353 -- # local d=1
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@355 -- # echo 1
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@365 -- # ver1[v]=1
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@366 -- # decimal 2
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@353 -- # local d=2
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:16.334     10:20:08 blockdev_general -- scripts/common.sh@355 -- # echo 2
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@366 -- # ver2[v]=2
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:16.334    10:20:08 blockdev_general -- scripts/common.sh@368 -- # return 0
00:05:16.334    10:20:08 blockdev_general -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:16.334    10:20:08 blockdev_general -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:16.334  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.334  		--rc genhtml_branch_coverage=1
00:05:16.334  		--rc genhtml_function_coverage=1
00:05:16.334  		--rc genhtml_legend=1
00:05:16.334  		--rc geninfo_all_blocks=1
00:05:16.334  		--rc geninfo_unexecuted_blocks=1
00:05:16.334  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.334  		'
00:05:16.334    10:20:08 blockdev_general -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:16.334  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.334  		--rc genhtml_branch_coverage=1
00:05:16.334  		--rc genhtml_function_coverage=1
00:05:16.334  		--rc genhtml_legend=1
00:05:16.334  		--rc geninfo_all_blocks=1
00:05:16.334  		--rc geninfo_unexecuted_blocks=1
00:05:16.334  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.334  		'
00:05:16.334    10:20:08 blockdev_general -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:05:16.334  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.334  		--rc genhtml_branch_coverage=1
00:05:16.334  		--rc genhtml_function_coverage=1
00:05:16.334  		--rc genhtml_legend=1
00:05:16.335  		--rc geninfo_all_blocks=1
00:05:16.335  		--rc geninfo_unexecuted_blocks=1
00:05:16.335  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.335  		'
00:05:16.335    10:20:08 blockdev_general -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:05:16.335  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:16.335  		--rc genhtml_branch_coverage=1
00:05:16.335  		--rc genhtml_function_coverage=1
00:05:16.335  		--rc genhtml_legend=1
00:05:16.335  		--rc geninfo_all_blocks=1
00:05:16.335  		--rc geninfo_unexecuted_blocks=1
00:05:16.335  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:05:16.335  		'
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:16.335    10:20:08 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@20 -- # :
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:05:16.335    10:20:08 blockdev_general -- bdev/blockdev.sh@711 -- # uname -s
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@711 -- # '[' FreeBSD = Linux ']'
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@716 -- # PRE_RESERVED_MEM=2048
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@719 -- # test_type=bdev
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@720 -- # crypto_device=
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@721 -- # dek=
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@722 -- # env_ctx=
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@727 -- # [[ bdev == bdev ]]
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@728 -- # wait_for_rpc=--wait-for-rpc
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=48944
00:05:16.335  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 48944
00:05:16.335   10:20:08 blockdev_general -- common/autotest_common.sh@835 -- # '[' -z 48944 ']'
00:05:16.335   10:20:08 blockdev_general -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:16.335   10:20:08 blockdev_general -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:16.335   10:20:08 blockdev_general -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:16.335   10:20:08 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc
00:05:16.335   10:20:08 blockdev_general -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:16.335   10:20:08 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:16.335  [2024-12-09 10:20:08.486469] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:16.335  [2024-12-09 10:20:08.486653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:16.900  EAL: TSC is not safe to use in SMP mode
00:05:16.900  EAL: TSC is not invariant
00:05:16.900  [2024-12-09 10:20:08.805333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.900  [2024-12-09 10:20:08.834568] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:16.900  [2024-12-09 10:20:08.834611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.468   10:20:09 blockdev_general -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@868 -- # return 0
00:05:17.469   10:20:09 blockdev_general -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:05:17.469   10:20:09 blockdev_general -- bdev/blockdev.sh@733 -- # setup_bdev_conf
00:05:17.469   10:20:09 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.469  [2024-12-09 10:20:09.422513] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:17.469  [2024-12-09 10:20:09.422563] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:17.469  
00:05:17.469  [2024-12-09 10:20:09.430495] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:17.469  [2024-12-09 10:20:09.430529] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:17.469  
00:05:17.469  Malloc0
00:05:17.469  Malloc1
00:05:17.469  Malloc2
00:05:17.469  Malloc3
00:05:17.469  Malloc4
00:05:17.469  Malloc5
00:05:17.469  Malloc6
00:05:17.469  Malloc7
00:05:17.469  Malloc8
00:05:17.469  Malloc9
00:05:17.469  [2024-12-09 10:20:09.518511] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:17.469  [2024-12-09 10:20:09.518559] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:17.469  [2024-12-09 10:20:09.518578] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a2fc163d980
00:05:17.469  [2024-12-09 10:20:09.518585] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:17.469  [2024-12-09 10:20:09.518982] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:17.469  [2024-12-09 10:20:09.519015] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:05:17.469  TestPT
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.469   10:20:09 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000
00:05:17.469  5000+0 records in
00:05:17.469  5000+0 records out
00:05:17.469  10240000 bytes transferred in 0.018738 secs (546494515 bytes/sec)
00:05:17.469   10:20:09 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 2048
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.469  AIO0
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.469   10:20:09 blockdev_general -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.469   10:20:09 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.469   10:20:09 blockdev_general -- bdev/blockdev.sh@777 -- # cat
00:05:17.469    10:20:09 blockdev_general -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:05:17.469    10:20:09 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.469    10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.469    10:20:09 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.469    10:20:09 blockdev_general -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:05:17.469    10:20:09 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.469    10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.729    10:20:09 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.729    10:20:09 blockdev_general -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:05:17.729    10:20:09 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.729    10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.729    10:20:09 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.729   10:20:09 blockdev_general -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:05:17.729    10:20:09 blockdev_general -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:05:17.729    10:20:09 blockdev_general -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:05:17.729    10:20:09 blockdev_general -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:17.729    10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.729    10:20:09 blockdev_general -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:17.729   10:20:09 blockdev_general -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:05:17.729    10:20:09 blockdev_general -- bdev/blockdev.sh@786 -- # jq -r .name
00:05:17.730    10:20:09 blockdev_general -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "2b0a7c5d-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2b0a7c5d-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "c768cce6-185b-8558-a0d0-020e682a053e"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "c768cce6-185b-8558-a0d0-020e682a053e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "c6c45de9-8ce2-cf55-bfde-cd78542eeb8d"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "c6c45de9-8ce2-cf55-bfde-cd78542eeb8d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "82b329b3-b8ee-3f5a-a988-d5bf8167c32a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "82b329b3-b8ee-3f5a-a988-d5bf8167c32a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,'
'    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "6745dcca-a40f-6457-9052-f0527576d2c7"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "6745dcca-a40f-6457-9052-f0527576d2c7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "ed2ab71f-c8a6-b955-83c6-156321ecd3cf"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ed2ab71f-c8a6-b955-83c6-156321ecd3cf",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": 
false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "1a62a3ef-bf1f-795a-ba45-6a59d7f20f2d"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "1a62a3ef-bf1f-795a-ba45-6a59d7f20f2d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "17b5c89a-aa70-ff51-9979-f8033ba888a8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "17b5c89a-aa70-ff51-9979-f8033ba888a8",' '  
"assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "37dc8db0-6f24-ee53-ba07-ba486fa38026"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "37dc8db0-6f24-ee53-ba07-ba486fa38026",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": "Malloc2p6",' '  "aliases": [' '    
"c37dc109-5396-8854-a974-62e4204c5b07"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "c37dc109-5396-8854-a974-62e4204c5b07",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "da33b633-1897-c75f-989d-07c4ed6095c9"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "da33b633-1897-c75f-989d-07c4ed6095c9",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  
"driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "34186b01-b4ec-7a50-b86c-51fbc024bb82"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "34186b01-b4ec-7a50-b86c-51fbc024bb82",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "2b17f92e-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "2b17f92e-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    
"nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b17f92e-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "2b0f5e16-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "2b10966d-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "2b192311-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "2b192311-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  
"supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b192311-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "2b11cf52-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "2b13075a-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    
"rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "2b143fcd-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "2b1578c9-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "2b21aea8-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "AIO 
disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "2b21aea8-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:05:17.730   10:20:09 blockdev_general -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:05:17.730   10:20:09 blockdev_general -- bdev/blockdev.sh@789 -- # hello_world_bdev=Malloc0
00:05:17.730   10:20:09 blockdev_general -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:05:17.730   10:20:09 blockdev_general -- bdev/blockdev.sh@791 -- # killprocess 48944
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@954 -- # '[' -z 48944 ']'
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@958 -- # kill -0 48944
00:05:17.730    10:20:09 blockdev_general -- common/autotest_common.sh@959 -- # uname
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:05:17.730    10:20:09 blockdev_general -- common/autotest_common.sh@962 -- # ps -c -o command 48944
00:05:17.730    10:20:09 blockdev_general -- common/autotest_common.sh@962 -- # tail -1
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:05:17.730  killing process with pid 48944
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48944'
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@973 -- # kill 48944
00:05:17.730   10:20:09 blockdev_general -- common/autotest_common.sh@978 -- # wait 48944
00:05:17.999   10:20:09 blockdev_general -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:05:17.999   10:20:09 blockdev_general -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:05:17.999   10:20:09 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:05:17.999   10:20:09 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:17.999   10:20:09 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:17.999  ************************************
00:05:17.999  START TEST bdev_hello_world
00:05:17.999  ************************************
00:05:17.999   10:20:09 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 ''
00:05:17.999  [2024-12-09 10:20:09.976904] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:17.999  [2024-12-09 10:20:09.977233] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:18.257  EAL: TSC is not safe to use in SMP mode
00:05:18.257  EAL: TSC is not invariant
00:05:18.257  [2024-12-09 10:20:10.290807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.257  [2024-12-09 10:20:10.325273] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:18.257  [2024-12-09 10:20:10.325397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.257  [2024-12-09 10:20:10.382706] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:18.257  [2024-12-09 10:20:10.382747] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:18.257  [2024-12-09 10:20:10.390683] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:18.257  [2024-12-09 10:20:10.390705] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:18.257  [2024-12-09 10:20:10.398695] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:18.257  [2024-12-09 10:20:10.398717] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:05:18.257  [2024-12-09 10:20:10.398723] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:05:18.514  [2024-12-09 10:20:10.446709] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:18.514  [2024-12-09 10:20:10.446756] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:18.514  [2024-12-09 10:20:10.446764] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26f259c38800
00:05:18.514  [2024-12-09 10:20:10.446771] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:18.514  [2024-12-09 10:20:10.447186] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:18.514  [2024-12-09 10:20:10.447213] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:05:18.514  [2024-12-09 10:20:10.546788] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:05:18.514  [2024-12-09 10:20:10.546830] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0
00:05:18.514  [2024-12-09 10:20:10.546840] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:05:18.514  [2024-12-09 10:20:10.546852] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:05:18.514  [2024-12-09 10:20:10.546864] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:05:18.514  [2024-12-09 10:20:10.546871] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:05:18.514  [2024-12-09 10:20:10.546881] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:05:18.514  
00:05:18.514  [2024-12-09 10:20:10.546888] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:05:18.514  
00:05:18.514  real	0m0.688s
00:05:18.514  user	0m0.359s
00:05:18.514  sys	0m0.327s
00:05:18.514   10:20:10 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:18.514   10:20:10 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:05:18.514  ************************************
00:05:18.514  END TEST bdev_hello_world
00:05:18.514  ************************************
00:05:18.772   10:20:10 blockdev_general -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:05:18.772   10:20:10 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:18.772   10:20:10 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.772   10:20:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:18.772  ************************************
00:05:18.772  START TEST bdev_bounds
00:05:18.772  ************************************
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=48992
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:05:18.772  Process bdevio pid: 48992
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 48992'
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 48992
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 48992 ']'
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.772  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:18.772   10:20:10 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:05:18.772  [2024-12-09 10:20:10.703571] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:18.772  [2024-12-09 10:20:10.703790] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:19.030  EAL: TSC is not safe to use in SMP mode
00:05:19.030  EAL: TSC is not invariant
00:05:19.030  [2024-12-09 10:20:11.007704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:19.030  [2024-12-09 10:20:11.036646] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:19.030  [2024-12-09 10:20:11.036694] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:19.030  [2024-12-09 10:20:11.036701] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2].
00:05:19.030  [2024-12-09 10:20:11.036789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:19.030  [2024-12-09 10:20:11.036885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.030  [2024-12-09 10:20:11.036885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:05:19.030  [2024-12-09 10:20:11.094538] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:19.030  [2024-12-09 10:20:11.094585] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:19.030  [2024-12-09 10:20:11.102535] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:19.030  [2024-12-09 10:20:11.102582] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:19.030  [2024-12-09 10:20:11.110545] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:19.030  [2024-12-09 10:20:11.110574] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:05:19.030  [2024-12-09 10:20:11.110581] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:05:19.030  [2024-12-09 10:20:11.158567] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:19.030  [2024-12-09 10:20:11.158611] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:19.030  [2024-12-09 10:20:11.158619] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d9542638800
00:05:19.030  [2024-12-09 10:20:11.158625] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:19.030  [2024-12-09 10:20:11.159217] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:19.030  [2024-12-09 10:20:11.159243] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:05:19.597   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:19.597   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:05:19.597   10:20:11 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:05:19.597  I/O targets:
00:05:19.597    Malloc0: 65536 blocks of 512 bytes (32 MiB)
00:05:19.597    Malloc1p0: 32768 blocks of 512 bytes (16 MiB)
00:05:19.597    Malloc1p1: 32768 blocks of 512 bytes (16 MiB)
00:05:19.597    Malloc2p0: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    Malloc2p1: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    Malloc2p2: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    Malloc2p3: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    Malloc2p4: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    Malloc2p5: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    Malloc2p6: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    Malloc2p7: 8192 blocks of 512 bytes (4 MiB)
00:05:19.597    TestPT: 65536 blocks of 512 bytes (32 MiB)
00:05:19.597    raid0: 131072 blocks of 512 bytes (64 MiB)
00:05:19.597    concat0: 131072 blocks of 512 bytes (64 MiB)
00:05:19.597    raid1: 65536 blocks of 512 bytes (32 MiB)
00:05:19.597    AIO0: 5000 blocks of 2048 bytes (10 MiB)
00:05:19.597  
00:05:19.597  
00:05:19.597       CUnit - A unit testing framework for C - Version 2.1-3
00:05:19.597       http://cunit.sourceforge.net/
00:05:19.597  
00:05:19.597  
00:05:19.597  Suite: bdevio tests on: AIO0
00:05:19.597    Test: blockdev write read block ...passed
00:05:19.597    Test: blockdev write zeroes read block ...passed
00:05:19.597    Test: blockdev write zeroes read no split ...passed
00:05:19.597    Test: blockdev write zeroes read split ...passed
00:05:19.597    Test: blockdev write zeroes read split partial ...passed
00:05:19.597    Test: blockdev reset ...passed
00:05:19.597    Test: blockdev write read 8 blocks ...passed
00:05:19.597    Test: blockdev write read size > 128k ...passed
00:05:19.597    Test: blockdev write read invalid size ...passed
00:05:19.597    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.597    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.597    Test: blockdev write read max offset ...passed
00:05:19.597    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.597    Test: blockdev writev readv 8 blocks ...passed
00:05:19.597    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.597    Test: blockdev writev readv block ...passed
00:05:19.597    Test: blockdev writev readv size > 128k ...passed
00:05:19.597    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.597    Test: blockdev comparev and writev ...passed
00:05:19.597    Test: blockdev nvme passthru rw ...passed
00:05:19.597    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.597    Test: blockdev nvme admin passthru ...passed
00:05:19.597    Test: blockdev copy ...passed
00:05:19.597  Suite: bdevio tests on: raid1
00:05:19.597    Test: blockdev write read block ...passed
00:05:19.597    Test: blockdev write zeroes read block ...passed
00:05:19.597    Test: blockdev write zeroes read no split ...passed
00:05:19.597    Test: blockdev write zeroes read split ...passed
00:05:19.597    Test: blockdev write zeroes read split partial ...passed
00:05:19.597    Test: blockdev reset ...passed
00:05:19.597    Test: blockdev write read 8 blocks ...passed
00:05:19.597    Test: blockdev write read size > 128k ...passed
00:05:19.597    Test: blockdev write read invalid size ...passed
00:05:19.597    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.597    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.597    Test: blockdev write read max offset ...passed
00:05:19.597    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.597    Test: blockdev writev readv 8 blocks ...passed
00:05:19.597    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.597    Test: blockdev writev readv block ...passed
00:05:19.597    Test: blockdev writev readv size > 128k ...passed
00:05:19.597    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.597    Test: blockdev comparev and writev ...passed
00:05:19.597    Test: blockdev nvme passthru rw ...passed
00:05:19.597    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.597    Test: blockdev nvme admin passthru ...passed
00:05:19.597    Test: blockdev copy ...passed
00:05:19.597  Suite: bdevio tests on: concat0
00:05:19.597    Test: blockdev write read block ...passed
00:05:19.597    Test: blockdev write zeroes read block ...passed
00:05:19.597    Test: blockdev write zeroes read no split ...passed
00:05:19.597    Test: blockdev write zeroes read split ...passed
00:05:19.597    Test: blockdev write zeroes read split partial ...passed
00:05:19.597    Test: blockdev reset ...passed
00:05:19.597    Test: blockdev write read 8 blocks ...passed
00:05:19.597    Test: blockdev write read size > 128k ...passed
00:05:19.597    Test: blockdev write read invalid size ...passed
00:05:19.597    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.597    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.597    Test: blockdev write read max offset ...passed
00:05:19.597    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.597    Test: blockdev writev readv 8 blocks ...passed
00:05:19.597    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.597    Test: blockdev writev readv block ...passed
00:05:19.597    Test: blockdev writev readv size > 128k ...passed
00:05:19.597    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.597    Test: blockdev comparev and writev ...passed
00:05:19.597    Test: blockdev nvme passthru rw ...passed
00:05:19.597    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.597    Test: blockdev nvme admin passthru ...passed
00:05:19.598    Test: blockdev copy ...passed
00:05:19.598  Suite: bdevio tests on: raid0
00:05:19.598    Test: blockdev write read block ...passed
00:05:19.598    Test: blockdev write zeroes read block ...passed
00:05:19.598    Test: blockdev write zeroes read no split ...passed
00:05:19.598    Test: blockdev write zeroes read split ...passed
00:05:19.598    Test: blockdev write zeroes read split partial ...passed
00:05:19.598    Test: blockdev reset ...passed
00:05:19.598    Test: blockdev write read 8 blocks ...passed
00:05:19.598    Test: blockdev write read size > 128k ...passed
00:05:19.598    Test: blockdev write read invalid size ...passed
00:05:19.598    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.598    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.598    Test: blockdev write read max offset ...passed
00:05:19.598    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.598    Test: blockdev writev readv 8 blocks ...passed
00:05:19.598    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.598    Test: blockdev writev readv block ...passed
00:05:19.598    Test: blockdev writev readv size > 128k ...passed
00:05:19.598    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.598    Test: blockdev comparev and writev ...passed
00:05:19.598    Test: blockdev nvme passthru rw ...passed
00:05:19.598    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.598    Test: blockdev nvme admin passthru ...passed
00:05:19.598    Test: blockdev copy ...passed
00:05:19.598  Suite: bdevio tests on: TestPT
00:05:19.598    Test: blockdev write read block ...passed
00:05:19.598    Test: blockdev write zeroes read block ...passed
00:05:19.598    Test: blockdev write zeroes read no split ...passed
00:05:19.598    Test: blockdev write zeroes read split ...passed
00:05:19.598    Test: blockdev write zeroes read split partial ...passed
00:05:19.598    Test: blockdev reset ...passed
00:05:19.956    Test: blockdev write read 8 blocks ...passed
00:05:19.956    Test: blockdev write read size > 128k ...passed
00:05:19.956    Test: blockdev write read invalid size ...passed
00:05:19.956    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.956    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.956    Test: blockdev write read max offset ...passed
00:05:19.956    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.956    Test: blockdev writev readv 8 blocks ...passed
00:05:19.956    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.956    Test: blockdev writev readv block ...passed
00:05:19.956    Test: blockdev writev readv size > 128k ...passed
00:05:19.956    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.956    Test: blockdev comparev and writev ...passed
00:05:19.956    Test: blockdev nvme passthru rw ...passed
00:05:19.956    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.956    Test: blockdev nvme admin passthru ...passed
00:05:19.956    Test: blockdev copy ...passed
00:05:19.956  Suite: bdevio tests on: Malloc2p7
00:05:19.956    Test: blockdev write read block ...passed
00:05:19.956    Test: blockdev write zeroes read block ...passed
00:05:19.956    Test: blockdev write zeroes read no split ...passed
00:05:19.956    Test: blockdev write zeroes read split ...passed
00:05:19.956    Test: blockdev write zeroes read split partial ...passed
00:05:19.956    Test: blockdev reset ...passed
00:05:19.956    Test: blockdev write read 8 blocks ...passed
00:05:19.956    Test: blockdev write read size > 128k ...passed
00:05:19.956    Test: blockdev write read invalid size ...passed
00:05:19.956    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.956    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.956    Test: blockdev write read max offset ...passed
00:05:19.956    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.956    Test: blockdev writev readv 8 blocks ...passed
00:05:19.956    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.956    Test: blockdev writev readv block ...passed
00:05:19.956    Test: blockdev writev readv size > 128k ...passed
00:05:19.956    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.956    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc2p6
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.957    Test: blockdev reset ...passed
00:05:19.957    Test: blockdev write read 8 blocks ...passed
00:05:19.957    Test: blockdev write read size > 128k ...passed
00:05:19.957    Test: blockdev write read invalid size ...passed
00:05:19.957    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.957    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.957    Test: blockdev write read max offset ...passed
00:05:19.957    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.957    Test: blockdev writev readv 8 blocks ...passed
00:05:19.957    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.957    Test: blockdev writev readv block ...passed
00:05:19.957    Test: blockdev writev readv size > 128k ...passed
00:05:19.957    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.957    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc2p5
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.957    Test: blockdev reset ...passed
00:05:19.957    Test: blockdev write read 8 blocks ...passed
00:05:19.957    Test: blockdev write read size > 128k ...passed
00:05:19.957    Test: blockdev write read invalid size ...passed
00:05:19.957    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.957    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.957    Test: blockdev write read max offset ...passed
00:05:19.957    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.957    Test: blockdev writev readv 8 blocks ...passed
00:05:19.957    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.957    Test: blockdev writev readv block ...passed
00:05:19.957    Test: blockdev writev readv size > 128k ...passed
00:05:19.957    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.957    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc2p4
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.957    Test: blockdev reset ...passed
00:05:19.957    Test: blockdev write read 8 blocks ...passed
00:05:19.957    Test: blockdev write read size > 128k ...passed
00:05:19.957    Test: blockdev write read invalid size ...passed
00:05:19.957    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.957    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.957    Test: blockdev write read max offset ...passed
00:05:19.957    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.957    Test: blockdev writev readv 8 blocks ...passed
00:05:19.957    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.957    Test: blockdev writev readv block ...passed
00:05:19.957    Test: blockdev writev readv size > 128k ...passed
00:05:19.957    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.957    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc2p3
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.957    Test: blockdev reset ...passed
00:05:19.957    Test: blockdev write read 8 blocks ...passed
00:05:19.957    Test: blockdev write read size > 128k ...passed
00:05:19.957    Test: blockdev write read invalid size ...passed
00:05:19.957    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.957    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.957    Test: blockdev write read max offset ...passed
00:05:19.957    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.957    Test: blockdev writev readv 8 blocks ...passed
00:05:19.957    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.957    Test: blockdev writev readv block ...passed
00:05:19.957    Test: blockdev writev readv size > 128k ...passed
00:05:19.957    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.957    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc2p2
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.957    Test: blockdev reset ...passed
00:05:19.957    Test: blockdev write read 8 blocks ...passed
00:05:19.957    Test: blockdev write read size > 128k ...passed
00:05:19.957    Test: blockdev write read invalid size ...passed
00:05:19.957    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.957    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.957    Test: blockdev write read max offset ...passed
00:05:19.957    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.957    Test: blockdev writev readv 8 blocks ...passed
00:05:19.957    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.957    Test: blockdev writev readv block ...passed
00:05:19.957    Test: blockdev writev readv size > 128k ...passed
00:05:19.957    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.957    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc2p1
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.957    Test: blockdev reset ...passed
00:05:19.957    Test: blockdev write read 8 blocks ...passed
00:05:19.957    Test: blockdev write read size > 128k ...passed
00:05:19.957    Test: blockdev write read invalid size ...passed
00:05:19.957    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.957    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.957    Test: blockdev write read max offset ...passed
00:05:19.957    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.957    Test: blockdev writev readv 8 blocks ...passed
00:05:19.957    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.957    Test: blockdev writev readv block ...passed
00:05:19.957    Test: blockdev writev readv size > 128k ...passed
00:05:19.957    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.957    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc2p0
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.957    Test: blockdev reset ...passed
00:05:19.957    Test: blockdev write read 8 blocks ...passed
00:05:19.957    Test: blockdev write read size > 128k ...passed
00:05:19.957    Test: blockdev write read invalid size ...passed
00:05:19.957    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.957    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.957    Test: blockdev write read max offset ...passed
00:05:19.957    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.957    Test: blockdev writev readv 8 blocks ...passed
00:05:19.957    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.957    Test: blockdev writev readv block ...passed
00:05:19.957    Test: blockdev writev readv size > 128k ...passed
00:05:19.957    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.957    Test: blockdev comparev and writev ...passed
00:05:19.957    Test: blockdev nvme passthru rw ...passed
00:05:19.957    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.957    Test: blockdev nvme admin passthru ...passed
00:05:19.957    Test: blockdev copy ...passed
00:05:19.957  Suite: bdevio tests on: Malloc1p1
00:05:19.957    Test: blockdev write read block ...passed
00:05:19.957    Test: blockdev write zeroes read block ...passed
00:05:19.957    Test: blockdev write zeroes read no split ...passed
00:05:19.957    Test: blockdev write zeroes read split ...passed
00:05:19.957    Test: blockdev write zeroes read split partial ...passed
00:05:19.958    Test: blockdev reset ...passed
00:05:19.958    Test: blockdev write read 8 blocks ...passed
00:05:19.958    Test: blockdev write read size > 128k ...passed
00:05:19.958    Test: blockdev write read invalid size ...passed
00:05:19.958    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.958    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.958    Test: blockdev write read max offset ...passed
00:05:19.958    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.958    Test: blockdev writev readv 8 blocks ...passed
00:05:19.958    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.958    Test: blockdev writev readv block ...passed
00:05:19.958    Test: blockdev writev readv size > 128k ...passed
00:05:19.958    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.958    Test: blockdev comparev and writev ...passed
00:05:19.958    Test: blockdev nvme passthru rw ...passed
00:05:19.958    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.958    Test: blockdev nvme admin passthru ...passed
00:05:19.958    Test: blockdev copy ...passed
00:05:19.958  Suite: bdevio tests on: Malloc1p0
00:05:19.958    Test: blockdev write read block ...passed
00:05:19.958    Test: blockdev write zeroes read block ...passed
00:05:19.958    Test: blockdev write zeroes read no split ...passed
00:05:19.958    Test: blockdev write zeroes read split ...passed
00:05:19.958    Test: blockdev write zeroes read split partial ...passed
00:05:19.958    Test: blockdev reset ...passed
00:05:19.958    Test: blockdev write read 8 blocks ...passed
00:05:19.958    Test: blockdev write read size > 128k ...passed
00:05:19.958    Test: blockdev write read invalid size ...passed
00:05:19.958    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.958    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.958    Test: blockdev write read max offset ...passed
00:05:19.958    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.958    Test: blockdev writev readv 8 blocks ...passed
00:05:19.958    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.958    Test: blockdev writev readv block ...passed
00:05:19.958    Test: blockdev writev readv size > 128k ...passed
00:05:19.958    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.958    Test: blockdev comparev and writev ...passed
00:05:19.958    Test: blockdev nvme passthru rw ...passed
00:05:19.958    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.958    Test: blockdev nvme admin passthru ...passed
00:05:19.958    Test: blockdev copy ...passed
00:05:19.958  Suite: bdevio tests on: Malloc0
00:05:19.958    Test: blockdev write read block ...passed
00:05:19.958    Test: blockdev write zeroes read block ...passed
00:05:19.958    Test: blockdev write zeroes read no split ...passed
00:05:19.958    Test: blockdev write zeroes read split ...passed
00:05:19.958    Test: blockdev write zeroes read split partial ...passed
00:05:19.958    Test: blockdev reset ...passed
00:05:19.958    Test: blockdev write read 8 blocks ...passed
00:05:19.958    Test: blockdev write read size > 128k ...passed
00:05:19.958    Test: blockdev write read invalid size ...passed
00:05:19.958    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:19.958    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:19.958    Test: blockdev write read max offset ...passed
00:05:19.958    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:19.958    Test: blockdev writev readv 8 blocks ...passed
00:05:19.958    Test: blockdev writev readv 30 x 1block ...passed
00:05:19.958    Test: blockdev writev readv block ...passed
00:05:19.958    Test: blockdev writev readv size > 128k ...passed
00:05:19.958    Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:19.958    Test: blockdev comparev and writev ...passed
00:05:19.958    Test: blockdev nvme passthru rw ...passed
00:05:19.958    Test: blockdev nvme passthru vendor specific ...passed
00:05:19.958    Test: blockdev nvme admin passthru ...passed
00:05:19.958    Test: blockdev copy ...passed
00:05:19.958  
00:05:19.958  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:05:19.958                suites     16     16    n/a      0        0
00:05:19.958                 tests    368    368    368      0        0
00:05:19.958               asserts   2224   2224   2224      0      n/a
00:05:19.958  
00:05:19.958  Elapsed time =    0.438 seconds
00:05:19.958  0
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 48992
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 48992 ']'
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 48992
00:05:19.958    10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:05:19.958    10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@962 -- # tail -1
00:05:19.958    10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@962 -- # ps -c -o command 48992
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@962 -- # process_name=bdevio
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@964 -- # '[' bdevio = sudo ']'
00:05:19.958  killing process with pid 48992
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 48992'
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@973 -- # kill 48992
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@978 -- # wait 48992
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:05:19.958  
00:05:19.958  real	0m1.304s
00:05:19.958  user	0m2.988s
00:05:19.958  sys	0m0.445s
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.958   10:20:11 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:05:19.958  ************************************
00:05:19.958  END TEST bdev_bounds
00:05:19.958  ************************************
00:05:19.958   10:20:12 blockdev_general -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:05:19.958   10:20:12 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:19.958   10:20:12 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.958   10:20:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:19.958  ************************************
00:05:19.958  START TEST bdev_nbd
00:05:19.958  ************************************
00:05:19.958   10:20:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:05:19.958    10:20:12 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:05:19.958   10:20:12 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ FreeBSD == Linux ]]
00:05:19.958   10:20:12 blockdev_general.bdev_nbd -- bdev/blockdev.sh@299 -- # return 0
00:05:19.958  
00:05:19.958  real	0m0.003s
00:05:19.958  user	0m0.004s
00:05:19.958  sys	0m0.000s
00:05:19.958   10:20:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.958   10:20:12 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:05:19.958  ************************************
00:05:19.958  END TEST bdev_nbd
00:05:19.958  ************************************
00:05:19.958   10:20:12 blockdev_general -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:05:19.958   10:20:12 blockdev_general -- bdev/blockdev.sh@801 -- # '[' bdev = nvme ']'
00:05:19.958   10:20:12 blockdev_general -- bdev/blockdev.sh@801 -- # '[' bdev = gpt ']'
00:05:19.958   10:20:12 blockdev_general -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:05:19.958   10:20:12 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:19.958   10:20:12 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.958   10:20:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:20.216  ************************************
00:05:20.216  START TEST bdev_fio
00:05:20.216  ************************************
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:05:20.216  /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:05:20.216    10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:05:20.216    10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:05:20.216   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:05:20.216    10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
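The two trace lines above are a version gate: `fio --version` is matched against a `fio-3*` glob, and only then does the generated config gain `serialize_overlap=1`. A minimal runnable sketch (the `fio-3.35` string stands in for the real `fio --version` output):

```shell
# Stand-in for `/usr/src/fio/fio --version` (assumption: output like "fio-3.35").
ver="fio-3.35"

# The test script emits serialize_overlap=1 only when fio reports a 3.x version.
case "$ver" in
  *fio-3*) echo "serialize_overlap=1" ;;
  *)       echo "skipping serialize_overlap for: $ver" >&2 ;;
esac
```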
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc0]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc0
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p0]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p0
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc1p1]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc1p1
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p0]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p0
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p1]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p1
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p2]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p2
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p3]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p3
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p4]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p4
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p5]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p5
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p6]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p6
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_Malloc2p7]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=Malloc2p7
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_TestPT]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=TestPT
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid0]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid0
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_concat0]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=concat0
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid1]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid1
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_AIO0]'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=AIO0
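Each pass through the `for b in "${bdevs_name[@]}"` loop above appends one job section to `bdev.fio`, so the generated file ends with a fragment like this (illustrative, matching the echoed job names):

```
[job_Malloc0]
filename=Malloc0

[job_Malloc1p0]
filename=Malloc1p0

...

[job_AIO0]
filename=AIO0
```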
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 			--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:20.474   10:20:12 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:05:20.474  ************************************
00:05:20.474  START TEST bdev_fio_rw_verify
00:05:20.474  ************************************
00:05:20.474   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:05:20.475    10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:20.475    10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:05:20.475    10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:05:20.475    10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:05:20.475    10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:05:20.475    10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:05:20.475   10:20:12 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:20.475  job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:20.475  fio-3.35
00:05:20.475  Starting 16 threads
00:05:21.043  EAL: TSC is not safe to use in SMP mode
00:05:21.043  EAL: TSC is not invariant
00:05:33.248  
00:05:33.248  job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101342: Mon Dec  9 10:20:23 2024
00:05:33.248    read: IOPS=211k, BW=824MiB/s (864MB/s)(8247MiB/10004msec)
00:05:33.248      slat (nsec): min=209, max=781190k, avg=6419.47, stdev=750161.39
00:05:33.248      clat (nsec): min=1047, max=2090.9M, avg=83659.56, stdev=4312386.92
00:05:33.248       lat (usec): min=2, max=2090.9k, avg=90.08, stdev=4486.72
00:05:33.248      clat percentiles (usec):
00:05:33.248       | 50.000th=[    26], 99.000th=[   750], 99.900th=[  1319],
00:05:33.248       | 99.990th=[ 94897], 99.999th=[235930]
00:05:33.248    write: IOPS=350k, BW=1367MiB/s (1434MB/s)(13.2GiB/9885msec); 0 zone resets
00:05:33.248      slat (nsec): min=658, max=1196.9M, avg=22538.24, stdev=1256283.57
00:05:33.248      clat (nsec): min=1207, max=1197.0M, avg=126993.37, stdev=2850509.76
00:05:33.248       lat (usec): min=10, max=1197.0k, avg=149.53, stdev=3114.90
00:05:33.248      clat percentiles (usec):
00:05:33.248       | 50.000th=[    62], 99.000th=[   717], 99.900th=[  4047],
00:05:33.248       | 99.990th=[ 94897], 99.999th=[196084]
00:05:33.248     bw (  MiB/s): min=  462, max= 2357, per=99.41%, avg=1359.32, stdev=36.91, samples=295
00:05:33.248     iops        : min=118284, max=603528, avg=347986.17, stdev=9450.06, samples=295
00:05:33.248    lat (usec)   : 2=0.01%, 4=0.69%, 10=6.42%, 20=12.84%, 50=39.01%
00:05:33.248    lat (usec)   : 100=29.80%, 250=9.48%, 500=0.13%, 750=0.76%, 1000=0.73%
00:05:33.248    lat (msec)   : 2=0.04%, 4=0.02%, 10=0.01%, 20=0.02%, 50=0.02%
00:05:33.248    lat (msec)   : 100=0.03%, 250=0.01%, 500=0.01%, 2000=0.01%, >=2000=0.01%
00:05:33.248    cpu          : usr=56.23%, sys=2.95%, ctx=632105, majf=0, minf=571
00:05:33.248    IO depths    : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0%
00:05:33.248       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:33.248       complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:33.248       issued rwts: total=2111104,3460112,0,0 short=0,0,0,0 dropped=0,0,0,0
00:05:33.248       latency   : target=0, window=0, percentile=100.00%, depth=8
00:05:33.248  
00:05:33.248  Run status group 0 (all jobs):
00:05:33.248     READ: bw=824MiB/s (864MB/s), 824MiB/s-824MiB/s (864MB/s-864MB/s), io=8247MiB (8647MB), run=10004-10004msec
00:05:33.248    WRITE: bw=1367MiB/s (1434MB/s), 1367MiB/s-1367MiB/s (1434MB/s-1434MB/s), io=13.2GiB (14.2GB), run=9885-9885msec
00:05:33.248  
00:05:33.248  real	0m11.589s
00:05:33.248  user	1m34.076s
00:05:33.248  sys	0m6.823s
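fio reports bandwidth in both binary and decimal units; the pairing in the summary above (824 MiB/s shown as 864 MB/s) is just the 1 MiB = 1048576 B conversion:

```shell
# 1 MiB = 1024*1024 bytes, 1 MB = 1000000 bytes, so the read-side
# figure above converts as 824 MiB/s -> ~864 MB/s.
mib=824
mb=$(( mib * 1024 * 1024 / 1000000 ))
echo "${mib} MiB/s ~= ${mb} MB/s"
```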
00:05:33.248   10:20:24 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.248   10:20:24 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:05:33.248  ************************************
00:05:33.248  END TEST bdev_fio_rw_verify
00:05:33.248  ************************************
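The `LD_PRELOAD` value traced in the test above comes from a sanitizer probe: `ldd` the fio plugin, grep for each sanitizer runtime, and preload whatever is found so ASan initializes before the plugin does. A minimal runnable sketch (probing `/bin/sh` instead of the SPDK plugin, so on a normal system it finds nothing):

```shell
# Print the resolved path of the first linked library matching $2 in binary $1.
# ldd lines look like "libc.so.6 => /lib/.../libc.so.6 (0x...)", so the file
# path is column 3, exactly as the awk in the trace above extracts it.
find_sanitizer_lib() {
    ldd "$1" 2>/dev/null | grep "$2" | awk '{print $3}'
}

ld_preload=""
for sanitizer in libasan libclang_rt.asan; do
    lib=$(find_sanitizer_lib /bin/sh "$sanitizer")
    [ -n "$lib" ] && ld_preload="$lib $ld_preload"
done

# In the real test the plugin path is appended so fio can load the spdk_bdev
# ioengine: LD_PRELOAD="$ld_preload $plugin" /usr/src/fio/fio ...
# Normally empty here: /bin/sh is not built with a sanitizer.
echo "preload list: '$ld_preload'"
```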
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:05:33.248   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:05:33.248    10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:05:33.250    10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "2b0a7c5d-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2b0a7c5d-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "c768cce6-185b-8558-a0d0-020e682a053e"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "c768cce6-185b-8558-a0d0-020e682a053e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": 
false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "c6c45de9-8ce2-cf55-bfde-cd78542eeb8d"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "c6c45de9-8ce2-cf55-bfde-cd78542eeb8d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "82b329b3-b8ee-3f5a-a988-d5bf8167c32a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "82b329b3-b8ee-3f5a-a988-d5bf8167c32a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    
"reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "6745dcca-a40f-6457-9052-f0527576d2c7"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "6745dcca-a40f-6457-9052-f0527576d2c7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "ed2ab71f-c8a6-b955-83c6-156321ecd3cf"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ed2ab71f-c8a6-b955-83c6-156321ecd3cf",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' 
'  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "1a62a3ef-bf1f-795a-ba45-6a59d7f20f2d"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "1a62a3ef-bf1f-795a-ba45-6a59d7f20f2d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "17b5c89a-aa70-ff51-9979-f8033ba888a8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": 
"17b5c89a-aa70-ff51-9979-f8033ba888a8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "37dc8db0-6f24-ee53-ba07-ba486fa38026"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "37dc8db0-6f24-ee53-ba07-ba486fa38026",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": 
"Malloc2p6",' '  "aliases": [' '    "c37dc109-5396-8854-a974-62e4204c5b07"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "c37dc109-5396-8854-a974-62e4204c5b07",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "da33b633-1897-c75f-989d-07c4ed6095c9"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "da33b633-1897-c75f-989d-07c4ed6095c9",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    
"nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "34186b01-b4ec-7a50-b86c-51fbc024bb82"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "34186b01-b4ec-7a50-b86c-51fbc024bb82",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "2b17f92e-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "2b17f92e-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    
"nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b17f92e-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "2b0f5e16-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "2b10966d-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "2b192311-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "2b192311-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  
"zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b192311-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "2b11cf52-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "2b13075a-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": 
{' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "2b143fcd-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "2b1578c9-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "2b21aea8-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  
"product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "2b21aea8-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:05:33.250   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n Malloc0
00:05:33.250  Malloc1p0
00:05:33.250  Malloc1p1
00:05:33.250  Malloc2p0
00:05:33.250  Malloc2p1
00:05:33.250  Malloc2p2
00:05:33.250  Malloc2p3
00:05:33.250  Malloc2p4
00:05:33.250  Malloc2p5
00:05:33.250  Malloc2p6
00:05:33.250  Malloc2p7
00:05:33.250  TestPT
00:05:33.250  raid0
00:05:33.250  concat0 ]]
00:05:33.250    10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:05:33.251    10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' '  "name": "Malloc0",' '  "aliases": [' '    "2b0a7c5d-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Malloc disk",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2b0a7c5d-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 20000,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {}' '}' '{' '  "name": "Malloc1p0",' '  "aliases": [' '    "c768cce6-185b-8558-a0d0-020e682a053e"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "c768cce6-185b-8558-a0d0-020e682a053e",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": 
false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc1p1",' '  "aliases": [' '    "c6c45de9-8ce2-cf55-bfde-cd78542eeb8d"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 32768,' '  "uuid": "c6c45de9-8ce2-cf55-bfde-cd78542eeb8d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc1",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p0",' '  "aliases": [' '    "82b329b3-b8ee-3f5a-a988-d5bf8167c32a"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "82b329b3-b8ee-3f5a-a988-d5bf8167c32a",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    
"reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 0' '    }' '  }' '}' '{' '  "name": "Malloc2p1",' '  "aliases": [' '    "6745dcca-a40f-6457-9052-f0527576d2c7"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "6745dcca-a40f-6457-9052-f0527576d2c7",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 8192' '    }' '  }' '}' '{' '  "name": "Malloc2p2",' '  "aliases": [' '    "ed2ab71f-c8a6-b955-83c6-156321ecd3cf"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "ed2ab71f-c8a6-b955-83c6-156321ecd3cf",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' 
'  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 16384' '    }' '  }' '}' '{' '  "name": "Malloc2p3",' '  "aliases": [' '    "1a62a3ef-bf1f-795a-ba45-6a59d7f20f2d"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "1a62a3ef-bf1f-795a-ba45-6a59d7f20f2d",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 24576' '    }' '  }' '}' '{' '  "name": "Malloc2p4",' '  "aliases": [' '    "17b5c89a-aa70-ff51-9979-f8033ba888a8"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": 
"17b5c89a-aa70-ff51-9979-f8033ba888a8",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 32768' '    }' '  }' '}' '{' '  "name": "Malloc2p5",' '  "aliases": [' '    "37dc8db0-6f24-ee53-ba07-ba486fa38026"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "37dc8db0-6f24-ee53-ba07-ba486fa38026",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 40960' '    }' '  }' '}' '{' '  "name": 
"Malloc2p6",' '  "aliases": [' '    "c37dc109-5396-8854-a974-62e4204c5b07"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "c37dc109-5396-8854-a974-62e4204c5b07",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 49152' '    }' '  }' '}' '{' '  "name": "Malloc2p7",' '  "aliases": [' '    "da33b633-1897-c75f-989d-07c4ed6095c9"' '  ],' '  "product_name": "Split Disk",' '  "block_size": 512,' '  "num_blocks": 8192,' '  "uuid": "da33b633-1897-c75f-989d-07c4ed6095c9",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    
"nvme_iov_md": false' '  },' '  "driver_specific": {' '    "split": {' '      "base_bdev": "Malloc2",' '      "offset_blocks": 57344' '    }' '  }' '}' '{' '  "name": "TestPT",' '  "aliases": [' '    "34186b01-b4ec-7a50-b86c-51fbc024bb82"' '  ],' '  "product_name": "passthru",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "34186b01-b4ec-7a50-b86c-51fbc024bb82",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": true,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "passthru": {' '      "name": "TestPT",' '      "base_bdev_name": "Malloc3"' '    }' '  }' '}' '{' '  "name": "raid0",' '  "aliases": [' '    "2b17f92e-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "2b17f92e-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    
"nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b17f92e-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "raid0",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc4",' '          "uuid": "2b0f5e16-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc5",' '          "uuid": "2b10966d-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "concat0",' '  "aliases": [' '    "2b192311-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 131072,' '  "uuid": "2b192311-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  
"zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b192311-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 64,' '      "state": "online",' '      "raid_level": "concat",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc6",' '          "uuid": "2b11cf52-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc7",' '          "uuid": "2b13075a-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "raid1",' '  "aliases": [' '    "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "Raid Volume",' '  "block_size": 512,' '  "num_blocks": 65536,' '  "uuid": "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": 
{' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": false,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "memory_domains": [' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    },' '    {' '      "dma_device_id": "system",' '      "dma_device_type": 1' '    },' '    {' '      "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' '      "dma_device_type": 2' '    }' '  ],' '  "driver_specific": {' '    "raid": {' '      "uuid": "2b1a5b21-b617-11ef-9b05-d5e34e08fe3b",' '      "strip_size_kb": 0,' '      "state": "online",' '      "raid_level": "raid1",' '      "superblock": false,' '      "num_base_bdevs": 2,' '      "num_base_bdevs_discovered": 2,' '      "num_base_bdevs_operational": 2,' '      "base_bdevs_list": [' '        {' '          "name": "Malloc8",' '          "uuid": "2b143fcd-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        },' '        {' '          "name": "Malloc9",' '          "uuid": "2b1578c9-b617-11ef-9b05-d5e34e08fe3b",' '          "is_configured": true,' '          "data_offset": 0,' '          "data_size": 65536' '        }' '      ]' '    }' '  }' '}' '{' '  "name": "AIO0",' '  "aliases": [' '    "2b21aea8-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  
"product_name": "AIO disk",' '  "block_size": 2048,' '  "num_blocks": 5000,' '  "uuid": "2b21aea8-b617-11ef-9b05-d5e34e08fe3b",' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": false,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": false,' '    "nvme_io": false,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": false,' '    "compare_and_write": false,' '    "abort": false,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": false,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "aio": {' '      "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' '      "block_size_override": true,' '      "readonly": false,' '      "fallocate": false' '    }' '  }' '}'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc0]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc0
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p0]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p0
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc1p1]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc1p1
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p0]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p0
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p1]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p1
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p2]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p2
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p3]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p3
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p4]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p4
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p5]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p5
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p6]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p6
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_Malloc2p7]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=Malloc2p7
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_TestPT]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=TestPT
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_raid0]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=raid0
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name')
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # echo '[job_concat0]'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo filename=concat0
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- bdev/blockdev.sh@366 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:33.251   10:20:24 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:05:33.251  ************************************
00:05:33.251  START TEST bdev_fio_trim
00:05:33.251  ************************************
00:05:33.251   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:33.251   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:33.251   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:05:33.251   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:05:33.251   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local sanitizers
00:05:33.251   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:33.251   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # shift
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1347 -- # local asan_lib=
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:05:33.252    10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:33.252    10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # grep libasan
00:05:33.252    10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # asan_lib=
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:05:33.252    10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:33.252    10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:05:33.252    10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1349 -- # asan_lib=
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:05:33.252   10:20:24 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:33.252  job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:33.252  fio-3.35
00:05:33.252  Starting 14 threads
00:05:33.252  EAL: TSC is not safe to use in SMP mode
00:05:33.252  EAL: TSC is not invariant
00:05:43.211  
00:05:43.211  job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101361: Mon Dec  9 10:20:35 2024
00:05:43.211    write: IOPS=549k, BW=2143MiB/s (2247MB/s)(20.9GiB/10003msec); 0 zone resets
00:05:43.211      slat (nsec): min=262, max=774276k, avg=6623.50, stdev=513833.12
00:05:43.211      clat (usec): min=2, max=1540.1k, avg=81.38, stdev=2857.11
00:05:43.211       lat (usec): min=2, max=1540.1k, avg=88.01, stdev=2903.15
00:05:43.211      clat percentiles (usec):
00:05:43.211       | 50.000th=[    38], 99.000th=[   742], 99.900th=[   889],
00:05:43.211       | 99.990th=[ 94897], 99.999th=[141558]
00:05:43.211     bw (  MiB/s): min=  752, max= 3357, per=100.00%, avg=2185.60, stdev=60.94, samples=258
00:05:43.211     iops        : min=192696, max=859544, avg=559514.82, stdev=15599.47, samples=258
00:05:43.211    trim: IOPS=549k, BW=2143MiB/s (2247MB/s)(20.9GiB/10003msec); 0 zone resets
00:05:43.211      slat (nsec): min=456, max=756894k, avg=5070.95, stdev=547025.54
00:05:43.211      clat (nsec): min=417, max=1540.1M, avg=66268.37, stdev=2747983.21
00:05:43.211       lat (nsec): min=1809, max=1540.1M, avg=71339.32, stdev=2801893.55
00:05:43.211      clat percentiles (usec):
00:05:43.211       | 50.000th=[    42], 99.000th=[    85], 99.900th=[   139],
00:05:43.211       | 99.990th=[ 94897], 99.999th=[130548]
00:05:43.211     bw (  MiB/s): min=  752, max= 3357, per=100.00%, avg=2185.61, stdev=60.94, samples=258
00:05:43.211     iops        : min=192696, max=859544, avg=559516.50, stdev=15599.43, samples=258
00:05:43.211    lat (nsec)   : 500=0.01%, 750=0.01%, 1000=0.01%
00:05:43.211    lat (usec)   : 2=0.02%, 4=0.17%, 10=1.26%, 20=4.18%, 50=69.14%
00:05:43.211    lat (usec)   : 100=23.99%, 250=0.35%, 500=0.01%, 750=0.39%, 1000=0.47%
00:05:43.211    lat (msec)   : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
00:05:43.211    lat (msec)   : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:05:43.211    lat (msec)   : 2000=0.01%
00:05:43.211    cpu          : usr=63.50%, sys=4.33%, ctx=767298, majf=0, minf=0
00:05:43.211    IO depths    : 1=12.4%, 2=24.8%, 4=50.1%, 8=12.8%, 16=0.0%, 32=0.0%, >=64=0.0%
00:05:43.211       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:43.211       complete  : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:43.211       issued rwts: total=0,5487385,5487388,0 short=0,0,0,0 dropped=0,0,0,0
00:05:43.211       latency   : target=0, window=0, percentile=100.00%, depth=8
00:05:43.211  
00:05:43.211  Run status group 0 (all jobs):
00:05:43.211    WRITE: bw=2143MiB/s (2247MB/s), 2143MiB/s-2143MiB/s (2247MB/s-2247MB/s), io=20.9GiB (22.5GB), run=10003-10003msec
00:05:43.211     TRIM: bw=2143MiB/s (2247MB/s), 2143MiB/s-2143MiB/s (2247MB/s-2247MB/s), io=20.9GiB (22.5GB), run=10003-10003msec
00:05:43.775  
00:05:43.775  real	0m11.547s
00:05:43.775  user	1m34.134s
00:05:43.775  sys	0m8.059s
00:05:43.775   10:20:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.775   10:20:35 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x
00:05:43.775  ************************************
00:05:43.775  END TEST bdev_fio_trim
00:05:43.775  ************************************
00:05:43.775   10:20:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # rm -f
00:05:43.775   10:20:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:05:43.775   10:20:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # popd
00:05:43.775  /home/vagrant/spdk_repo/spdk
00:05:43.775   10:20:35 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:05:43.775  
00:05:43.775  real	0m23.681s
00:05:43.775  user	3m8.306s
00:05:43.775  sys	0m15.301s
00:05:43.775   10:20:35 blockdev_general.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:43.775   10:20:35 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:05:43.775  ************************************
00:05:43.775  END TEST bdev_fio
00:05:43.775  ************************************
00:05:43.775   10:20:35 blockdev_general -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:05:43.775   10:20:35 blockdev_general -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:05:43.775   10:20:35 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:05:43.775   10:20:35 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:43.775   10:20:35 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:43.775  ************************************
00:05:43.775  START TEST bdev_verify
00:05:43.775  ************************************
00:05:43.775   10:20:35 blockdev_general.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:05:43.775  [2024-12-09 10:20:35.786851] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:43.775  [2024-12-09 10:20:35.787139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:44.035  EAL: TSC is not safe to use in SMP mode
00:05:44.035  EAL: TSC is not invariant
00:05:44.035  [2024-12-09 10:20:36.091372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:44.035  [2024-12-09 10:20:36.120271] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:44.035  [2024-12-09 10:20:36.120313] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:44.035  [2024-12-09 10:20:36.120535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:44.035  [2024-12-09 10:20:36.120817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.035  [2024-12-09 10:20:36.178344] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:44.035  [2024-12-09 10:20:36.178386] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:44.035  [2024-12-09 10:20:36.186331] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:44.035  [2024-12-09 10:20:36.186350] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:44.358  [2024-12-09 10:20:36.194344] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:44.358  [2024-12-09 10:20:36.194364] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:05:44.358  [2024-12-09 10:20:36.194370] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:05:44.358  [2024-12-09 10:20:36.242355] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:44.358  [2024-12-09 10:20:36.242393] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:44.358  [2024-12-09 10:20:36.242400] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15122b038800
00:05:44.358  [2024-12-09 10:20:36.242406] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:44.358  [2024-12-09 10:20:36.242780] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:44.358  [2024-12-09 10:20:36.242821] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:05:44.358  Running I/O for 5 seconds...
00:05:46.677     162314.00 IOPS,   634.04 MiB/s
[2024-12-09T10:20:39.770Z]    161710.50 IOPS,   631.68 MiB/s
[2024-12-09T10:20:40.703Z]    169583.67 IOPS,   662.44 MiB/s
[2024-12-09T10:20:41.635Z]    170220.00 IOPS,   664.92 MiB/s
[2024-12-09T10:20:41.635Z]    166456.20 IOPS,   650.22 MiB/s
00:05:49.474                                                                                                  Latency(us)
00:05:49.474  
[2024-12-09T10:20:41.635Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:05:49.474  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x1000
00:05:49.474  	 Malloc0             :       5.03    5932.92      23.18       0.00     0.00   21541.20      70.10   54042.01
00:05:49.474  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x1000 length 0x1000
00:05:49.474  	 Malloc0             :       5.04     255.06       1.00       0.00     0.00  501284.82     850.71  916294.37
00:05:49.474  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x800
00:05:49.474  	 Malloc1p0           :       5.02    5196.89      20.30       0.00     0.00   24612.30     535.63   27424.30
00:05:49.474  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x800 length 0x800
00:05:49.474  	 Malloc1p0           :       5.02    6094.80      23.81       0.00     0.00   20985.86     523.03   26617.71
00:05:49.474  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x800
00:05:49.474  	 Malloc1p1           :       5.02    5196.54      20.30       0.00     0.00   24606.06     472.62   26617.71
00:05:49.474  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x800 length 0x800
00:05:49.474  	 Malloc1p1           :       5.02    6094.34      23.81       0.00     0.00   20980.68     472.62   26214.41
00:05:49.474  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p0           :       5.03    5196.21      20.30       0.00     0.00   24599.71     456.86   26819.36
00:05:49.474  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x200 length 0x200
00:05:49.474  	 Malloc2p0           :       5.02    6094.03      23.80       0.00     0.00   20975.47     453.71   25710.28
00:05:49.474  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p1           :       5.03    5195.86      20.30       0.00     0.00   24593.59     516.73   26617.71
00:05:49.474  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x200 length 0x200
00:05:49.474  	 Malloc2p1           :       5.02    6093.74      23.80       0.00     0.00   20970.03     519.88   25609.46
00:05:49.474  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p2           :       5.03    5195.50      20.29       0.00     0.00   24587.06     466.31   26012.76
00:05:49.474  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x200 length 0x200
00:05:49.474  	 Malloc2p2           :       5.02    6093.35      23.80       0.00     0.00   20963.84     463.16   25306.99
00:05:49.474  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p3           :       5.03    5195.17      20.29       0.00     0.00   24580.10     441.11   26012.76
00:05:49.474  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x200 length 0x200
00:05:49.474  	 Malloc2p3           :       5.02    6093.01      23.80       0.00     0.00   20958.43     444.26   25206.16
00:05:49.474  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p4           :       5.03    5194.80      20.29       0.00     0.00   24575.23     431.66   26012.76
00:05:49.474  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x200 length 0x200
00:05:49.474  	 Malloc2p4           :       5.02    6092.76      23.80       0.00     0.00   20953.48     431.66   25206.16
00:05:49.474  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p5           :       5.03    5194.42      20.29       0.00     0.00   24570.05     412.75   26214.41
00:05:49.474  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x200 length 0x200
00:05:49.474  	 Malloc2p5           :       5.02    6092.46      23.80       0.00     0.00   20947.99     406.45   25306.99
00:05:49.474  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p6           :       5.03    5194.09      20.29       0.00     0.00   24564.64     497.82   26214.41
00:05:49.474  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x200 length 0x200
00:05:49.474  	 Malloc2p6           :       5.02    6092.20      23.80       0.00     0.00   20942.93     491.52   25004.51
00:05:49.474  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.474  	 Verification LBA range: start 0x0 length 0x200
00:05:49.474  	 Malloc2p7           :       5.03    5193.71      20.29       0.00     0.00   24558.40     447.41   26012.76
00:05:49.474  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x200 length 0x200
00:05:49.475  	 Malloc2p7           :       5.02    6091.83      23.80       0.00     0.00   20937.34     447.41   25004.51
00:05:49.475  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x0 length 0x1000
00:05:49.475  	 TestPT              :       5.03    5169.27      20.19       0.00     0.00   24656.88    3680.10   26416.06
00:05:49.475  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x1000 length 0x1000
00:05:49.475  	 TestPT              :       5.04    5002.88      19.54       0.00     0.00   25469.96    3554.07   72593.74
00:05:49.475  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x0 length 0x2000
00:05:49.475  	 raid0               :       5.03    5213.35      20.36       0.00     0.00   24433.53     466.31   23895.44
00:05:49.475  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x2000 length 0x2000
00:05:49.475  	 raid0               :       5.03    6108.91      23.86       0.00     0.00   20851.56     472.62   22080.60
00:05:49.475  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x0 length 0x2000
00:05:49.475  	 concat0             :       5.03    5213.13      20.36       0.00     0.00   24427.21     507.27   24601.21
00:05:49.475  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x2000 length 0x2000
00:05:49.475  	 concat0             :       5.03    6108.53      23.86       0.00     0.00   20846.84     507.27   22887.19
00:05:49.475  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x0 length 0x1000
00:05:49.475  	 raid1               :       5.03    5212.86      20.36       0.00     0.00   24420.61     557.69   25306.99
00:05:49.475  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x1000 length 0x1000
00:05:49.475  	 raid1               :       5.03    6108.03      23.86       0.00     0.00   20841.99     560.84   23492.14
00:05:49.475  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x0 length 0x4e2
00:05:49.475  	 AIO0                :       5.09    1203.51       4.70       0.00     0.00  105414.88    9074.22  248431.92
00:05:49.475  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:05:49.475  	 Verification LBA range: start 0x4e2 length 0x4e2
00:05:49.475  	 AIO0                :       5.09    1222.21       4.77       0.00     0.00  103958.04    1279.21  253271.51
00:05:49.475  
[2024-12-09T10:20:41.636Z]  ===================================================================================================================
00:05:49.475  
[2024-12-09T10:20:41.636Z]  Total                       :             165636.41     647.02       0.00     0.00   24669.09      70.10  916294.37
00:05:49.475  
00:05:49.475  real	0m5.797s
00:05:49.475  user	0m9.917s
00:05:49.475  sys	0m0.416s
00:05:49.475  ************************************
00:05:49.475  END TEST bdev_verify
00:05:49.475  ************************************
00:05:49.475   10:20:41 blockdev_general.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:49.475   10:20:41 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:05:49.475   10:20:41 blockdev_general -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:05:49.475   10:20:41 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:05:49.475   10:20:41 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:49.475   10:20:41 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:49.475  ************************************
00:05:49.475  START TEST bdev_verify_big_io
00:05:49.475  ************************************
00:05:49.475   10:20:41 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:05:49.475  [2024-12-09 10:20:41.613057] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:49.475  [2024-12-09 10:20:41.613206] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:50.044  EAL: TSC is not safe to use in SMP mode
00:05:50.044  EAL: TSC is not invariant
00:05:50.044  [2024-12-09 10:20:41.927900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:50.044  [2024-12-09 10:20:41.956192] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:50.044  [2024-12-09 10:20:41.956229] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:50.044  [2024-12-09 10:20:41.956590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:50.044  [2024-12-09 10:20:41.956466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:50.044  [2024-12-09 10:20:42.014236] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:50.044  [2024-12-09 10:20:42.014271] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:50.044  [2024-12-09 10:20:42.022223] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:50.044  [2024-12-09 10:20:42.022243] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:50.044  [2024-12-09 10:20:42.030234] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:50.044  [2024-12-09 10:20:42.030254] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:05:50.044  [2024-12-09 10:20:42.030260] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:05:50.044  [2024-12-09 10:20:42.078248] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:50.044  [2024-12-09 10:20:42.078277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:50.044  [2024-12-09 10:20:42.078285] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28acab838800
00:05:50.044  [2024-12-09 10:20:42.078291] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:50.044  [2024-12-09 10:20:42.078640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:50.044  [2024-12-09 10:20:42.078668] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:05:50.044  [2024-12-09 10:20:42.178981] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:05:50.044  [2024-12-09 10:20:42.179086] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
00:05:50.044  [2024-12-09 10:20:42.179144] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179201] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179260] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179321] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179378] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179436] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179493] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179558] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179619] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179680] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179742] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179803] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179859] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.179935] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
00:05:50.045  [2024-12-09 10:20:42.180771] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:05:50.045  [2024-12-09 10:20:42.180878] bdevperf.c:1968:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
00:05:50.045  Running I/O for 5 seconds...
00:05:54.565      16592.00 IOPS,  1037.00 MiB/s
[2024-12-09T10:20:47.672Z]     22448.00 IOPS,  1403.00 MiB/s
[2024-12-09T10:20:47.672Z]     19713.33 IOPS,  1232.08 MiB/s
[2024-12-09T10:20:47.672Z]     17599.50 IOPS,  1099.97 MiB/s
00:05:55.511                                                                                                  Latency(us)
00:05:55.511  
[2024-12-09T10:20:47.673Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:05:55.512  Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x100
00:05:55.512  	 Malloc0             :       5.07    2500.56     156.28       0.00     0.00   50949.41      71.68  183904.15
00:05:55.512  Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x100 length 0x100
00:05:55.512  	 Malloc0             :       5.10    1883.59     117.72       0.00     0.00   67704.10      58.68  203262.48
00:05:55.512  Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x80
00:05:55.512  	 Malloc1p0           :       5.18     327.73      20.48       0.00     0.00  384885.14     288.30  532354.12
00:05:55.512  Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x80 length 0x80
00:05:55.512  	 Malloc1p0           :       5.12     916.22      57.26       0.00     0.00  138613.56     633.30  187937.14
00:05:55.512  Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x80
00:05:55.512  	 Malloc1p1           :       5.18     330.49      20.66       0.00     0.00  380924.30     278.84  516222.18
00:05:55.512  Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x80 length 0x80
00:05:55.512  	 Malloc1p1           :       5.13     261.74      16.36       0.00     0.00  484244.77     241.03  490411.07
00:05:55.512  Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p0           :       5.12     315.90      19.74       0.00     0.00   99770.07     186.68  197616.30
00:05:55.512  Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p0           :       5.11     244.20      15.26       0.00     0.00  129586.23     171.72  136314.92
00:05:55.512  Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p1           :       5.12     315.89      19.74       0.00     0.00   99680.18     193.77  195196.51
00:05:55.512  Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p1           :       5.11     244.19      15.26       0.00     0.00  129474.07     175.66  133895.13
00:05:55.512  Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p2           :       5.12     315.88      19.74       0.00     0.00   99597.18     183.53  192776.72
00:05:55.512  Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p2           :       5.11     244.18      15.26       0.00     0.00  129405.62     174.08  132281.93
00:05:55.512  Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p3           :       5.12     315.86      19.74       0.00     0.00   99492.50     186.68  189550.33
00:05:55.512  Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p3           :       5.11     244.17      15.26       0.00     0.00  129318.86     170.93  130668.74
00:05:55.512  Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p4           :       5.13     318.18      19.89       0.00     0.00   98715.31     206.38  186323.94
00:05:55.512  Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p4           :       5.11     244.16      15.26       0.00     0.00  129250.51     181.96  129055.54
00:05:55.512  Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p5           :       5.13     318.16      19.89       0.00     0.00   98619.56     192.20  183904.15
00:05:55.512  Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p5           :       5.11     244.15      15.26       0.00     0.00  129141.16     178.02  127442.35
00:05:55.512  Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p6           :       5.13     318.15      19.88       0.00     0.00   98524.31     190.62  181484.36
00:05:55.512  Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p6           :       5.11     244.14      15.26       0.00     0.00  129070.18     174.87  125829.16
00:05:55.512  Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x20
00:05:55.512  	 Malloc2p7           :       5.13     318.14      19.88       0.00     0.00   98426.25     181.96  179064.57
00:05:55.512  Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x20 length 0x20
00:05:55.512  	 Malloc2p7           :       5.11     244.13      15.26       0.00     0.00  129005.11     171.72  124215.96
00:05:55.512  Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x100
00:05:55.512  	 TestPT              :       5.24     329.97      20.62       0.00     0.00  376109.65     218.98  477505.51
00:05:55.512  Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x100 length 0x100
00:05:55.512  	 TestPT              :       5.30     238.62      14.91       0.00     0.00  521244.47   12754.32  480731.90
00:05:55.512  Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x200
00:05:55.512  	 raid0               :       5.17     340.42      21.28       0.00     0.00  365762.36     297.75  461373.57
00:05:55.512  Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x200 length 0x200
00:05:55.512  	 raid0               :       5.14     261.73      16.36       0.00     0.00  478934.55     286.72  477505.51
00:05:55.512  Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x200
00:05:55.512  	 concat0             :       5.17     362.02      22.63       0.00     0.00  343093.53     308.78  445241.63
00:05:55.512  Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x200 length 0x200
00:05:55.512  	 concat0             :       5.14     264.80      16.55       0.00     0.00  472459.68     250.49  477505.51
00:05:55.512  Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x100
00:05:55.512  	 raid1               :       5.18     372.47      23.28       0.00     0.00  332509.57     392.27  429109.69
00:05:55.512  Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x100 length 0x100
00:05:55.512  	 raid1               :       5.13     268.02      16.75       0.00     0.00  465481.49     315.08  477505.51
00:05:55.512  Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x0 length 0x4e
00:05:55.512  	 AIO0                :       5.18     360.12      22.51       0.00     0.00  209035.02     560.84  271016.64
00:05:55.512  Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:05:55.512  	 Verification LBA range: start 0x4e length 0x4e
00:05:55.512  	 AIO0                :       5.14     271.23      16.95       0.00     0.00  280004.48     475.77  275856.23
00:05:55.512  
[2024-12-09T10:20:47.673Z]  ===================================================================================================================
00:05:55.512  
[2024-12-09T10:20:47.673Z]  Total                       :              13779.20     861.20       0.00     0.00  176002.87      58.68  532354.12
00:05:55.512  
00:05:55.512  real	0m6.016s
00:05:55.512  user	0m11.229s
00:05:55.512  sys	0m0.378s
00:05:55.512   10:20:47 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:55.512   10:20:47 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:05:55.512  ************************************
00:05:55.512  END TEST bdev_verify_big_io
00:05:55.512  ************************************
00:05:55.771   10:20:47 blockdev_general -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:55.771   10:20:47 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:05:55.771   10:20:47 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:55.771   10:20:47 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:55.771  ************************************
00:05:55.771  START TEST bdev_write_zeroes
00:05:55.771  ************************************
00:05:55.771   10:20:47 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:55.771  [2024-12-09 10:20:47.683877] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:55.771  [2024-12-09 10:20:47.684003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:56.029  EAL: TSC is not safe to use in SMP mode
00:05:56.029  EAL: TSC is not invariant
00:05:56.029  [2024-12-09 10:20:47.986736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:56.029  [2024-12-09 10:20:48.026806] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:56.029  [2024-12-09 10:20:48.026936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:56.029  [2024-12-09 10:20:48.088322] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:56.029  [2024-12-09 10:20:48.088376] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:05:56.029  [2024-12-09 10:20:48.096303] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:56.029  [2024-12-09 10:20:48.096336] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:05:56.029  [2024-12-09 10:20:48.104319] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:56.029  [2024-12-09 10:20:48.104352] bdev.c:8670:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:05:56.029  [2024-12-09 10:20:48.104364] vbdev_passthru.c: 737:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:05:56.029  [2024-12-09 10:20:48.152340] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:05:56.029  [2024-12-09 10:20:48.152392] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:56.029  [2024-12-09 10:20:48.152403] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2fe74f238800
00:05:56.029  [2024-12-09 10:20:48.152412] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:56.029  [2024-12-09 10:20:48.152847] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:56.029  [2024-12-09 10:20:48.152887] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:05:56.289  Running I/O for 1 seconds...
00:05:57.230     688900.00 IOPS,  2691.02 MiB/s
00:05:57.230                                                                                                  Latency(us)
00:05:57.230  
[2024-12-09T10:20:49.391Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:05:57.230  Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.230  	 Malloc0             :       1.01   45767.09     178.78       0.00     0.00    2796.30     106.34    6024.27
00:05:57.230  Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.230  	 Malloc1p0           :       1.01   45763.27     178.76       0.00     0.00    2796.05     147.30    5797.42
00:05:57.231  Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc1p1           :       1.01   45757.96     178.74       0.00     0.00    2795.45     133.12    5595.77
00:05:57.231  Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p0           :       1.01   45754.59     178.73       0.00     0.00    2794.64     136.27    5268.09
00:05:57.231  Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p1           :       1.01   45749.41     178.71       0.00     0.00    2794.45     130.76    5041.23
00:05:57.231  Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p2           :       1.01   45744.65     178.69       0.00     0.00    2793.91     129.97    4940.41
00:05:57.231  Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p3           :       1.01   45740.79     178.67       0.00     0.00    2793.58     130.76    4839.58
00:05:57.231  Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p4           :       1.01   45734.70     178.65       0.00     0.00    2793.13     130.76    4738.76
00:05:57.231  Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p5           :       1.01   45730.35     178.63       0.00     0.00    2792.56     132.33    4789.17
00:05:57.231  Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p6           :       1.01   45726.72     178.62       0.00     0.00    2791.95     131.54    4915.20
00:05:57.231  Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 Malloc2p7           :       1.01   45720.55     178.60       0.00     0.00    2791.59     132.33    5116.85
00:05:57.231  Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 TestPT              :       1.01   45716.96     178.58       0.00     0.00    2790.75     133.91    5318.50
00:05:57.231  Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 raid0               :       1.01   45712.52     178.56       0.00     0.00    2790.34     220.55    5469.74
00:05:57.231  Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 concat0             :       1.01   45708.03     178.55       0.00     0.00    2789.26     206.38    5646.18
00:05:57.231  Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 raid1               :       1.01   45701.73     178.52       0.00     0.00    2788.17     341.86    5923.45
00:05:57.231  Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:05:57.231  	 AIO0                :       1.08    1561.82       6.10       0.00     0.00   77846.75     570.29  380713.86
00:05:57.231  
[2024-12-09T10:20:49.392Z]  ===================================================================================================================
00:05:57.231  
[2024-12-09T10:20:49.392Z]  Total                       :             687591.12    2685.90       0.00     0.00    2975.76     106.34  380713.86
00:05:57.490  
00:05:57.490  real	0m1.785s
00:05:57.490  user	0m1.381s
00:05:57.490  sys	0m0.355s
00:05:57.490   10:20:49 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:57.490  ************************************
00:05:57.490  END TEST bdev_write_zeroes
00:05:57.490  ************************************
00:05:57.490   10:20:49 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:05:57.490   10:20:49 blockdev_general -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:57.490   10:20:49 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:05:57.490   10:20:49 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:57.490   10:20:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:57.490  ************************************
00:05:57.490  START TEST bdev_json_nonenclosed
00:05:57.490  ************************************
00:05:57.490   10:20:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:57.490  [2024-12-09 10:20:49.544609] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:57.490  [2024-12-09 10:20:49.544871] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:57.750  EAL: TSC is not safe to use in SMP mode
00:05:57.750  EAL: TSC is not invariant
00:05:57.750  [2024-12-09 10:20:49.878168] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.068  [2024-12-09 10:20:49.912520] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:58.068  [2024-12-09 10:20:49.912636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:58.068  [2024-12-09 10:20:49.912665] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:05:58.068  [2024-12-09 10:20:49.912674] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:05:58.068  [2024-12-09 10:20:49.912704] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:58.068  
00:05:58.068  real	0m0.438s
00:05:58.068  user	0m0.078s
00:05:58.068  sys	0m0.360s
00:05:58.068   10:20:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:58.068   10:20:49 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:05:58.068  ************************************
00:05:58.068  END TEST bdev_json_nonenclosed
00:05:58.068  ************************************
00:05:58.068   10:20:50 blockdev_general -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:58.068   10:20:50 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:05:58.068   10:20:50 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:58.068   10:20:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:58.068  ************************************
00:05:58.068  START TEST bdev_json_nonarray
00:05:58.068  ************************************
00:05:58.068   10:20:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:05:58.068  [2024-12-09 10:20:50.049365] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:58.068  [2024-12-09 10:20:50.049547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:58.341  EAL: TSC is not safe to use in SMP mode
00:05:58.341  EAL: TSC is not invariant
00:05:58.341  [2024-12-09 10:20:50.397848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.341  [2024-12-09 10:20:50.426305] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:58.341  [2024-12-09 10:20:50.426383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:58.341  [2024-12-09 10:20:50.426418] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:05:58.341  [2024-12-09 10:20:50.426426] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:05:58.341  [2024-12-09 10:20:50.426433] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:58.341  
00:05:58.341  real	0m0.433s
00:05:58.341  user	0m0.040s
00:05:58.341  sys	0m0.379s
00:05:58.341   10:20:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:58.341  ************************************
00:05:58.341  END TEST bdev_json_nonarray
00:05:58.341  ************************************
00:05:58.341   10:20:50 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:05:58.603   10:20:50 blockdev_general -- bdev/blockdev.sh@824 -- # [[ bdev == bdev ]]
00:05:58.603   10:20:50 blockdev_general -- bdev/blockdev.sh@825 -- # run_test bdev_qos qos_test_suite ''
00:05:58.603   10:20:50 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:05:58.603   10:20:50 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:58.603   10:20:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:58.603  ************************************
00:05:58.603  START TEST bdev_qos
00:05:58.603  ************************************
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1129 -- # qos_test_suite ''
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # QOS_PID=49389
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # echo 'Process qos testing pid: 49389'
00:05:58.603  Process qos testing pid: 49389
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # waitforlisten 49389
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@444 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- common/autotest_common.sh@835 -- # '[' -z 49389 ']'
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:58.603  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:58.603   10:20:50 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:05:58.603  [2024-12-09 10:20:50.560608] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:05:58.603  [2024-12-09 10:20:50.560788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:58.866  EAL: TSC is not safe to use in SMP mode
00:05:58.866  EAL: TSC is not invariant
00:05:58.866  [2024-12-09 10:20:50.867231] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:58.866  [2024-12-09 10:20:50.893118] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:05:58.866  [2024-12-09 10:20:50.893158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@868 -- # return 0
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@450 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:05:59.439  Malloc_0
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # waitforbdev Malloc_0
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_0
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:05:59.439  [
00:05:59.439  {
00:05:59.439  "name": "Malloc_0",
00:05:59.439  "aliases": [
00:05:59.439  "44142d3c-b617-11ef-9b05-d5e34e08fe3b"
00:05:59.439  ],
00:05:59.439  "product_name": "Malloc disk",
00:05:59.439  "block_size": 512,
00:05:59.439  "num_blocks": 262144,
00:05:59.439  "uuid": "44142d3c-b617-11ef-9b05-d5e34e08fe3b",
00:05:59.439  "assigned_rate_limits": {
00:05:59.439  "rw_ios_per_sec": 0,
00:05:59.439  "rw_mbytes_per_sec": 0,
00:05:59.439  "r_mbytes_per_sec": 0,
00:05:59.439  "w_mbytes_per_sec": 0
00:05:59.439  },
00:05:59.439  "claimed": false,
00:05:59.439  "zoned": false,
00:05:59.439  "supported_io_types": {
00:05:59.439  "read": true,
00:05:59.439  "write": true,
00:05:59.439  "unmap": true,
00:05:59.439  "flush": true,
00:05:59.439  "reset": true,
00:05:59.439  "nvme_admin": false,
00:05:59.439  "nvme_io": false,
00:05:59.439  "nvme_io_md": false,
00:05:59.439  "write_zeroes": true,
00:05:59.439  "zcopy": true,
00:05:59.439  "get_zone_info": false,
00:05:59.439  "zone_management": false,
00:05:59.439  "zone_append": false,
00:05:59.439  "compare": false,
00:05:59.439  "compare_and_write": false,
00:05:59.439  "abort": true,
00:05:59.439  "seek_hole": false,
00:05:59.439  "seek_data": false,
00:05:59.439  "copy": true,
00:05:59.439  "nvme_iov_md": false
00:05:59.439  },
00:05:59.439  "memory_domains": [
00:05:59.439  {
00:05:59.439  "dma_device_id": "system",
00:05:59.439  "dma_device_type": 1
00:05:59.439  },
00:05:59.439  {
00:05:59.439  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:59.439  "dma_device_type": 2
00:05:59.439  }
00:05:59.439  ],
00:05:59.439  "driver_specific": {}
00:05:59.439  }
00:05:59.439  ]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # rpc_cmd bdev_null_create Null_1 128 512
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:05:59.439  Null_1
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # waitforbdev Null_1
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@903 -- # local bdev_name=Null_1
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # local i
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:59.439   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:05:59.439  [
00:05:59.439  {
00:05:59.439  "name": "Null_1",
00:05:59.439  "aliases": [
00:05:59.439  "44187277-b617-11ef-9b05-d5e34e08fe3b"
00:05:59.439  ],
00:05:59.439  "product_name": "Null disk",
00:05:59.439  "block_size": 512,
00:05:59.439  "num_blocks": 262144,
00:05:59.439  "uuid": "44187277-b617-11ef-9b05-d5e34e08fe3b",
00:05:59.439  "assigned_rate_limits": {
00:05:59.439  "rw_ios_per_sec": 0,
00:05:59.439  "rw_mbytes_per_sec": 0,
00:05:59.439  "r_mbytes_per_sec": 0,
00:05:59.439  "w_mbytes_per_sec": 0
00:05:59.439  },
00:05:59.439  "claimed": false,
00:05:59.439  "zoned": false,
00:05:59.439  "supported_io_types": {
00:05:59.439  "read": true,
00:05:59.439  "write": true,
00:05:59.439  "unmap": false,
00:05:59.439  "flush": false,
00:05:59.439  "reset": true,
00:05:59.439  "nvme_admin": false,
00:05:59.439  "nvme_io": false,
00:05:59.439  "nvme_io_md": false,
00:05:59.439  "write_zeroes": true,
00:05:59.439  "zcopy": false,
00:05:59.439  "get_zone_info": false,
00:05:59.439  "zone_management": false,
00:05:59.439  "zone_append": false,
00:05:59.440  "compare": false,
00:05:59.440  "compare_and_write": false,
00:05:59.440  "abort": true,
00:05:59.440  "seek_hole": false,
00:05:59.440  "seek_data": false,
00:05:59.440  "copy": false,
00:05:59.440  "nvme_iov_md": false
00:05:59.440  },
00:05:59.440  "driver_specific": {}
00:05:59.440  }
00:05:59.440  ]
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- common/autotest_common.sh@911 -- # return 0
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # qos_function_test
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@409 -- # local qos_lower_iops_limit=1000
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_bw_limit=2
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local io_result=0
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local iops_limit=0
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local bw_limit=0
00:05:59.440   10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@455 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:05:59.440    10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # get_io_result IOPS Malloc_0
00:05:59.440    10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:05:59.440    10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:05:59.440    10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:05:59.440     10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:05:59.440     10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:05:59.440     10:20:51 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:05:59.440  Running I/O for 60 seconds...
00:06:01.760    1201152.00 IOPS,  4692.00 MiB/s
[2024-12-09T10:20:54.863Z]   1202944.00 IOPS,  4699.00 MiB/s
[2024-12-09T10:20:55.803Z]   1200128.00 IOPS,  4688.00 MiB/s
[2024-12-09T10:20:56.747Z]   1234816.00 IOPS,  4823.50 MiB/s
[2024-12-09T10:20:57.008Z]   1252352.00 IOPS,  4892.00 MiB/s
[2024-12-09T10:20:57.008Z]   10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  672471.70  2689886.80  0.00       0.00       2882560.00  0.00     0.00   '
00:06:04.847    10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:06:04.847     10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:06:04.847    10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # iostat_result=672471.70
00:06:04.847    10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 672471
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@415 -- # io_result=672471
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@417 -- # iops_limit=168000
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # '[' 168000 -gt 1000 ']'
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@421 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 168000 Malloc_0
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # run_test bdev_qos_iops run_qos_test 168000 IOPS Malloc_0
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:04.847   10:20:56 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:04.847  ************************************
00:06:04.847  START TEST bdev_qos_iops
00:06:04.847  ************************************
00:06:04.847   10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1129 -- # run_qos_test 168000 IOPS Malloc_0
00:06:04.847   10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@388 -- # local qos_limit=168000
00:06:04.847   10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_result=0
00:06:04.847    10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # get_io_result IOPS Malloc_0
00:06:04.847    10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@374 -- # local limit_type=IOPS
00:06:04.847    10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:06:04.847    10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local iostat_result
00:06:04.847     10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:04.847     10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:06:04.847     10:20:56 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # tail -1
00:06:06.718    1184821.33 IOPS,  4628.21 MiB/s
[2024-12-09T10:20:59.813Z]   1176216.00 IOPS,  4594.59 MiB/s
[2024-12-09T10:21:00.750Z]   1170285.00 IOPS,  4571.43 MiB/s
[2024-12-09T10:21:01.684Z]   1163311.11 IOPS,  4544.18 MiB/s
[2024-12-09T10:21:02.618Z]   1156801.60 IOPS,  4518.76 MiB/s
[2024-12-09T10:21:02.618Z]   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  167887.27  671549.06   0.00       0.00       704256.00   0.00     0.00   '
00:06:10.457    10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # '[' IOPS = IOPS ']'
00:06:10.457     10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # awk '{print $2}'
00:06:10.457    10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # iostat_result=167887.27
00:06:10.457    10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@384 -- # echo 167887
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@391 -- # qos_result=167887
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # '[' IOPS = BANDWIDTH ']'
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@395 -- # lower_limit=151200
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # upper_limit=184800
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 167887 -lt 151200 ']'
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@399 -- # '[' 167887 -gt 184800 ']'
00:06:10.457  
00:06:10.457  real	0m5.372s
00:06:10.457  user	0m0.093s
00:06:10.457  sys	0m0.024s
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:10.457   10:21:02 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
00:06:10.457  ************************************
00:06:10.457  END TEST bdev_qos_iops
00:06:10.457  ************************************
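The bdev_qos_iops test that just finished can be condensed into a small arithmetic sketch. This is a hypothetical reconstruction, not the actual blockdev.sh code: judging from the logged numbers, the cap is roughly 25% of the measured baseline rounded down to a whole thousand, and the pass criterion is a +/-10% band around that cap. All concrete values below are taken from the log lines above; variable names are illustrative.

```shell
# Reconstruction of the IOPS QoS check (assumed, based on logged values):
# baseline 672471 IOPS -> cap 168000 -> limited run must land in +/-10%.
io_result=672471                                 # baseline iostat reading
iops_limit=$(((io_result / 4 / 1000) * 1000))    # 168000, ~25% rounded down
measured=167887                                  # iostat reading under limit
lower=$((iops_limit * 9 / 10))                   # 151200
upper=$((iops_limit * 11 / 10))                  # 184800
if [ "$measured" -ge "$lower" ] && [ "$measured" -le "$upper" ]; then
    echo PASS
else
    echo FAIL
fi
```

The measured 167887 IOPS sits comfortably inside the [151200, 184800] band, which is why neither `-lt` nor `-gt` branch fired in the trace above.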
00:06:10.457    10:21:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # get_io_result BANDWIDTH Null_1
00:06:10.457    10:21:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:06:10.457    10:21:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:06:10.457    10:21:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local iostat_result
00:06:10.457     10:21:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # grep Null_1
00:06:10.457     10:21:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:10.457     10:21:02 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # tail -1
00:06:11.831    1150832.00 IOPS,  4495.44 MiB/s
[2024-12-09T10:21:04.928Z]   1144356.67 IOPS,  4470.14 MiB/s
[2024-12-09T10:21:05.861Z]   1138923.69 IOPS,  4448.92 MiB/s
[2024-12-09T10:21:06.796Z]   1131341.14 IOPS,  4419.30 MiB/s
[2024-12-09T10:21:07.728Z]   1121908.27 IOPS,  4382.45 MiB/s
[2024-12-09T10:21:07.986Z]   1115286.00 IOPS,  4356.59 MiB/s
[2024-12-09T10:21:07.986Z]   10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    829582.67  3318330.68  0.00       0.00       3571712.00  0.00     0.00   '
00:06:15.825    10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:06:15.825    10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:15.825     10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:06:15.825    10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # iostat_result=3571712.00
00:06:15.825    10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@384 -- # echo 3571712
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@426 -- # bw_limit=3571712
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=348
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # '[' 348 -lt 2 ']'
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@431 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 348 Null_1
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # run_test bdev_qos_bw run_qos_test 348 BANDWIDTH Null_1
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.825   10:21:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:15.825  ************************************
00:06:15.825  START TEST bdev_qos_bw
00:06:15.825  ************************************
00:06:15.825   10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1129 -- # run_qos_test 348 BANDWIDTH Null_1
00:06:15.825   10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@388 -- # local qos_limit=348
00:06:15.825   10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:06:15.825    10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Null_1
00:06:15.825    10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:06:15.825    10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Null_1
00:06:15.825    10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:06:15.825     10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:15.825     10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # tail -1
00:06:15.825     10:21:07 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # grep Null_1
00:06:17.686    1074138.94 IOPS,  4195.86 MiB/s
[2024-12-09T10:21:10.782Z]   1028751.67 IOPS,  4018.56 MiB/s
[2024-12-09T10:21:11.716Z]    988137.79 IOPS,  3859.91 MiB/s
[2024-12-09T10:21:12.661Z]    951594.15 IOPS,  3717.16 MiB/s
[2024-12-09T10:21:13.594Z]    918526.19 IOPS,  3587.99 MiB/s
[2024-12-09T10:21:13.594Z]   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # iostat_result='Null_1    89199.58   356798.33  0.00       0.00       385172.00  0.00     0.00   '
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:21.433     10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # iostat_result=385172.00
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@384 -- # echo 385172
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@391 -- # qos_result=385172
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # qos_limit=356352
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@395 -- # lower_limit=320716
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # upper_limit=391987
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 385172 -lt 320716 ']'
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@399 -- # '[' 385172 -gt 391987 ']'
00:06:21.433  
00:06:21.433  real	0m5.497s
00:06:21.433  user	0m0.092s
00:06:21.433  sys	0m0.025s
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:06:21.433  ************************************
00:06:21.433  END TEST bdev_qos_bw
00:06:21.433  ************************************
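The bdev_qos_bw run above follows the same pattern with a unit conversion layered on top: iostat.py reports KiB/s in its sixth column, the cap is set in MiB/s via `bdev_set_qos_limit --rw_mbytes_per_sec`, and the pass check converts the cap back to KiB/s. A hedged sketch using the logged values (the exact rounding in blockdev.sh is assumed, not quoted):

```shell
# Assumed reconstruction of the bandwidth cap derivation seen above.
measured_kib=3571712                      # iostat.py column 6 for Null_1
limit_mib=$((measured_kib / 1024 / 10))   # 348 MiB/s, ~10% of baseline
qos_kib=$((limit_mib * 1024))             # 356352 KiB/s comparison basis
lower=$((qos_kib * 9 / 10))               # 320716
upper=$((qos_kib * 11 / 10))              # 391987
echo "cap=${limit_mib}MiB/s band=[${lower},${upper}]KiB/s"
```

The limited-run reading of 385172 KiB/s falls inside that band, matching the two `-lt`/`-gt` comparisons traced above.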
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@435 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.433   10:21:13 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:21.433  ************************************
00:06:21.433  START TEST bdev_qos_ro_bw
00:06:21.433  ************************************
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1129 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@388 -- # local qos_limit=2
00:06:21.433   10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_result=0
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # get_io_result BANDWIDTH Malloc_0
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@374 -- # local limit_type=BANDWIDTH
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local qos_dev=Malloc_0
00:06:21.433    10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local iostat_result
00:06:21.433     10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:21.433     10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # grep Malloc_0
00:06:21.433     10:21:13 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # tail -1
00:06:22.804     886539.27 IOPS,  3463.04 MiB/s
[2024-12-09T10:21:15.909Z]    851889.74 IOPS,  3327.69 MiB/s
[2024-12-09T10:21:16.843Z]    820127.62 IOPS,  3203.62 MiB/s
[2024-12-09T10:21:17.777Z]    790906.52 IOPS,  3089.48 MiB/s
[2024-12-09T10:21:18.710Z]    763933.19 IOPS,  2984.11 MiB/s
[2024-12-09T10:21:18.968Z]    738957.89 IOPS,  2886.55 MiB/s
[2024-12-09T10:21:18.968Z]   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # iostat_result='Malloc_0  512.22    2048.87    0.00       0.00       2172.00    0.00     0.00   '
00:06:26.807    10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # '[' BANDWIDTH = IOPS ']'
00:06:26.807    10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@380 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:26.807     10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # awk '{print $6}'
00:06:26.807    10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # iostat_result=2172.00
00:06:26.807    10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@384 -- # echo 2172
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@391 -- # qos_result=2172
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # qos_limit=2048
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@395 -- # lower_limit=1843
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # upper_limit=2252
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2172 -lt 1843 ']'
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@399 -- # '[' 2172 -gt 2252 ']'
00:06:26.807  
00:06:26.807  real	0m5.434s
00:06:26.807  user	0m0.104s
00:06:26.807  sys	0m0.016s
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:26.807   10:21:18 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:06:26.807  ************************************
00:06:26.807  END TEST bdev_qos_ro_bw
00:06:26.807  ************************************
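The bdev_qos_ro_bw test is the simplest of the three: a fixed 2 MiB/s read-only limit is applied with `--r_mbytes_per_sec`, converted to KiB/s, and checked against the same +/-10% tolerance. A minimal sketch with the logged values (illustrative, not the literal script):

```shell
# Assumed reconstruction of the read-only bandwidth check above.
qos_limit_mib=2
qos_kib=$((qos_limit_mib * 1024))    # 2048 KiB/s
measured=2172                        # iostat.py column 6 for Malloc_0
lower=$((qos_kib * 9 / 10))          # 1843
upper=$((qos_kib * 11 / 10))         # 2252
if [ "$measured" -ge "$lower" ] && [ "$measured" -le "$upper" ]; then
    echo PASS
fi
```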
00:06:26.808   10:21:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:06:26.808   10:21:18 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.808   10:21:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_null_delete Null_1
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.375  
00:06:27.375                                                                                                  Latency(us)
00:06:27.375  
[2024-12-09T10:21:19.536Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:27.375  Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:27.375  	 Malloc_0            :      27.72  220353.11     860.75       0.00     0.00    1150.41     319.80  503316.62
00:06:27.375  Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:27.375  	 Null_1              :      27.74  501390.41    1958.56       0.00     0.00     510.37      55.14   13006.38
00:06:27.375  
[2024-12-09T10:21:19.536Z]  ===================================================================================================================
00:06:27.375  
[2024-12-09T10:21:19.536Z]  Total                       :             721743.53    2819.31       0.00     0.00     705.70      55.14  503316.62
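The Total line can be cross-checked against the per-job figures (the full-precision IOPS values appear in the JSON results block further down in this log). A quick awk check, since plain shell arithmetic is integer-only:

```shell
# Verify the summary arithmetic: per-job IOPS should sum to the Total.
# The two doubles are copied from the JSON results in this log.
awk 'BEGIN {
    malloc_0 = 220353.11472634185
    null_1   = 501390.41216185625
    printf "%.2f\n", malloc_0 + null_1
}'
```

The sum rounds to 721743.53, agreeing with the Total row of the summary table.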
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # killprocess 49389
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@954 -- # '[' -z 49389 ']'
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # kill -0 49389
00:06:27.375  {
00:06:27.375    "results": [
00:06:27.375      {
00:06:27.375        "job": "Malloc_0",
00:06:27.375        "core_mask": "0x2",
00:06:27.375        "workload": "randread",
00:06:27.375        "status": "finished",
00:06:27.375        "queue_depth": 256,
00:06:27.375        "io_size": 4096,
00:06:27.375        "runtime": 27.723511,
00:06:27.375        "iops": 220353.11472634185,
00:06:27.375        "mibps": 860.7543543997729,
00:06:27.375        "io_failed": 0,
00:06:27.375        "io_timeout": 0,
00:06:27.375        "avg_latency_us": 1150.4113348767808,
00:06:27.375        "min_latency_us": 319.8031680669798,
00:06:27.375        "max_latency_us": 503316.6234452377
00:06:27.375      },
00:06:27.375      {
00:06:27.375        "job": "Null_1",
00:06:27.375        "core_mask": "0x2",
00:06:27.375        "workload": "randread",
00:06:27.375        "status": "finished",
00:06:27.375        "queue_depth": 256,
00:06:27.375        "io_size": 4096,
00:06:27.375        "runtime": 27.739976,
00:06:27.375        "iops": 501390.41216185625,
00:06:27.375        "mibps": 1958.556297507251,
00:06:27.375        "io_failed": 0,
00:06:27.375        "io_timeout": 0,
00:06:27.375        "avg_latency_us": 510.37023119365244,
00:06:27.375        "min_latency_us": 55.13847725292756,
00:06:27.375        "max_latency_us": 13006.379091433426
00:06:27.375      }
00:06:27.375    ],
00:06:27.375    "core_count": 1
00:06:27.375  }
00:06:27.375    10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # uname
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:06:27.375    10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@962 -- # ps -c -o command 49389
00:06:27.375    10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@962 -- # tail -1
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@962 -- # process_name=bdevperf
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@964 -- # '[' bdevperf = sudo ']'
00:06:27.375  killing process with pid 49389
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49389'
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@973 -- # kill 49389
00:06:27.375  Received shutdown signal, test time was about 27.752266 seconds
00:06:27.375  
00:06:27.375                                                                                                  Latency(us)
00:06:27.375  
[2024-12-09T10:21:19.536Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:27.375  
[2024-12-09T10:21:19.536Z]  ===================================================================================================================
00:06:27.375  
[2024-12-09T10:21:19.536Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@978 -- # wait 49389
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # trap - SIGINT SIGTERM EXIT
00:06:27.375  
00:06:27.375  real	0m28.854s
00:06:27.375  user	0m29.390s
00:06:27.375  sys	0m0.567s
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:27.375   10:21:19 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.375  ************************************
00:06:27.375  END TEST bdev_qos
00:06:27.375  ************************************
00:06:27.375   10:21:19 blockdev_general -- bdev/blockdev.sh@826 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:06:27.375   10:21:19 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:27.375   10:21:19 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:27.375   10:21:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:27.375  ************************************
00:06:27.375  START TEST bdev_qd_sampling
00:06:27.375  ************************************
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1129 -- # qd_sampling_test_suite ''
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@537 -- # QD_DEV=Malloc_QD
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # QD_PID=49610
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # echo 'Process bdev QD sampling period testing pid: 49610'
00:06:27.375  Process bdev QD sampling period testing pid: 49610
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # waitforlisten 49610
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@835 -- # '[' -z 49610 ']'
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@539 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:27.375  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:27.375   10:21:19 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:27.375  [2024-12-09 10:21:19.446517] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:27.375  [2024-12-09 10:21:19.446684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:27.634  EAL: TSC is not safe to use in SMP mode
00:06:27.634  EAL: TSC is not invariant
00:06:27.634  [2024-12-09 10:21:19.752465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:27.634  [2024-12-09 10:21:19.782492] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:27.634  [2024-12-09 10:21:19.782529] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:27.634  [2024-12-09 10:21:19.782628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:27.634  [2024-12-09 10:21:19.782621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:28.199   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:28.199   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@868 -- # return 0
00:06:28.199   10:21:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@545 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:06:28.199   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.199   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:28.457  Malloc_QD
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # waitforbdev Malloc_QD
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_QD
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # local i
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:28.457  [
00:06:28.457  {
00:06:28.457  "name": "Malloc_QD",
00:06:28.457  "aliases": [
00:06:28.457  "554f704c-b617-11ef-9b05-d5e34e08fe3b"
00:06:28.457  ],
00:06:28.457  "product_name": "Malloc disk",
00:06:28.457  "block_size": 512,
00:06:28.457  "num_blocks": 262144,
00:06:28.457  "uuid": "554f704c-b617-11ef-9b05-d5e34e08fe3b",
00:06:28.457  "assigned_rate_limits": {
00:06:28.457  "rw_ios_per_sec": 0,
00:06:28.457  "rw_mbytes_per_sec": 0,
00:06:28.457  "r_mbytes_per_sec": 0,
00:06:28.457  "w_mbytes_per_sec": 0
00:06:28.457  },
00:06:28.457  "claimed": false,
00:06:28.457  "zoned": false,
00:06:28.457  "supported_io_types": {
00:06:28.457  "read": true,
00:06:28.457  "write": true,
00:06:28.457  "unmap": true,
00:06:28.457  "flush": true,
00:06:28.457  "reset": true,
00:06:28.457  "nvme_admin": false,
00:06:28.457  "nvme_io": false,
00:06:28.457  "nvme_io_md": false,
00:06:28.457  "write_zeroes": true,
00:06:28.457  "zcopy": true,
00:06:28.457  "get_zone_info": false,
00:06:28.457  "zone_management": false,
00:06:28.457  "zone_append": false,
00:06:28.457  "compare": false,
00:06:28.457  "compare_and_write": false,
00:06:28.457  "abort": true,
00:06:28.457  "seek_hole": false,
00:06:28.457  "seek_data": false,
00:06:28.457  "copy": true,
00:06:28.457  "nvme_iov_md": false
00:06:28.457  },
00:06:28.457  "memory_domains": [
00:06:28.457  {
00:06:28.457  "dma_device_id": "system",
00:06:28.457  "dma_device_type": 1
00:06:28.457  },
00:06:28.457  {
00:06:28.457  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:28.457  "dma_device_type": 2
00:06:28.457  }
00:06:28.457  ],
00:06:28.457  "driver_specific": {}
00:06:28.457  }
00:06:28.457  ]
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@911 -- # return 0
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # sleep 2
00:06:28.457   10:21:20 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@548 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:06:28.457  Running I/O for 5 seconds...
00:06:30.330     771840.00 IOPS,  3015.00 MiB/s
[2024-12-09T10:21:22.491Z]  10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # qd_sampling_function_test Malloc_QD
00:06:30.330   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@518 -- # local bdev_name=Malloc_QD
00:06:30.330   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local sampling_period=10
00:06:30.330   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local iostats
00:06:30.330   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@522 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:06:30.330   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.330   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:30.330     770816.00 IOPS,  3011.00 MiB/s
[2024-12-09T10:21:22.491Z]  10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.330    10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:06:30.330    10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.330    10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:30.330    10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.330   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@524 -- # iostats='{
00:06:30.330  "tick_rate": 2599999259,
00:06:30.330  "ticks": 829161578916,
00:06:30.330  "bdevs": [
00:06:30.330  {
00:06:30.330  "name": "Malloc_QD",
00:06:30.330  "bytes_read": 6330290688,
00:06:30.330  "num_read_ops": 1545475,
00:06:30.330  "bytes_written": 0,
00:06:30.330  "num_write_ops": 0,
00:06:30.330  "bytes_unmapped": 0,
00:06:30.330  "num_unmap_ops": 0,
00:06:30.330  "bytes_copied": 0,
00:06:30.330  "num_copy_ops": 0,
00:06:30.330  "read_latency_ticks": 2666256595786,
00:06:30.330  "max_read_latency_ticks": 2631238,
00:06:30.330  "min_read_latency_ticks": 48472,
00:06:30.330  "write_latency_ticks": 0,
00:06:30.330  "max_write_latency_ticks": 0,
00:06:30.330  "min_write_latency_ticks": 0,
00:06:30.330  "unmap_latency_ticks": 0,
00:06:30.330  "max_unmap_latency_ticks": 0,
00:06:30.330  "min_unmap_latency_ticks": 0,
00:06:30.330  "copy_latency_ticks": 0,
00:06:30.330  "max_copy_latency_ticks": 0,
00:06:30.330  "min_copy_latency_ticks": 0,
00:06:30.330  "io_error": {},
00:06:30.330  "queue_depth_polling_period": 10,
00:06:30.330  "queue_depth": 512,
00:06:30.330  "io_time": 130,
00:06:30.330  "weighted_io_time": 71680
00:06:30.330  }
00:06:30.330  ]
00:06:30.330  }'
00:06:30.330    10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:06:30.588   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@526 -- # qd_sampling_period=10
00:06:30.588   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 == null ']'
00:06:30.588   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@528 -- # '[' 10 -ne 10 ']'
00:06:30.588   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@552 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:06:30.588   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:30.588   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:30.588  
00:06:30.588                                                                                                  Latency(us)
00:06:30.588  
[2024-12-09T10:21:22.749Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:30.588  Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:06:30.588  	 Malloc_QD           :       2.03  380889.07    1487.85       0.00     0.00     671.15     122.09    1014.55
00:06:30.588  Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:30.588  	 Malloc_QD           :       2.03  389911.22    1523.09       0.00     0.00     655.63      65.77     784.54
00:06:30.588  
[2024-12-09T10:21:22.749Z]  ===================================================================================================================
00:06:30.588  
[2024-12-09T10:21:22.749Z]  Total                       :             770800.29    3010.94       0.00     0.00     663.30      65.77    1014.55
00:06:30.588   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:30.588  {
00:06:30.588    "results": [
00:06:30.588      {
00:06:30.588        "job": "Malloc_QD",
00:06:30.588        "core_mask": "0x1",
00:06:30.588        "workload": "randread",
00:06:30.588        "status": "finished",
00:06:30.588        "queue_depth": 256,
00:06:30.588        "io_size": 4096,
00:06:30.588        "runtime": 2.03381,
00:06:30.588        "iops": 380889.0702671341,
00:06:30.588        "mibps": 1487.8479307309926,
00:06:30.588        "io_failed": 0,
00:06:30.588        "io_timeout": 0,
00:06:30.588        "avg_latency_us": 671.1469089544327,
00:06:30.588        "min_latency_us": 122.0923424886253,
00:06:30.589        "max_latency_us": 1014.547981453867
00:06:30.589      },
00:06:30.589      {
00:06:30.589        "job": "Malloc_QD",
00:06:30.589        "core_mask": "0x2",
00:06:30.589        "workload": "randread",
00:06:30.589        "status": "finished",
00:06:30.589        "queue_depth": 256,
00:06:30.589        "io_size": 4096,
00:06:30.589        "runtime": 2.034022,
00:06:30.589        "iops": 389911.2202326229,
00:06:30.589        "mibps": 1523.090704033683,
00:06:30.589        "io_failed": 0,
00:06:30.589        "io_timeout": 0,
00:06:30.589        "avg_latency_us": 655.6332111391221,
00:06:30.589        "min_latency_us": 65.77232643742073,
00:06:30.589        "max_latency_us": 784.5417620559407
00:06:30.589      }
00:06:30.589    ],
00:06:30.589    "core_count": 2
00:06:30.589  }
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # killprocess 49610
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@954 -- # '[' -z 49610 ']'
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # kill -0 49610
00:06:30.589    10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # uname
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:06:30.589    10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@962 -- # ps -c -o command 49610
00:06:30.589    10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@962 -- # tail -1
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@962 -- # process_name=bdevperf
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@964 -- # '[' bdevperf = sudo ']'
00:06:30.589  killing process with pid 49610
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49610'
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@973 -- # kill 49610
00:06:30.589  Received shutdown signal, test time was about 2.054919 seconds
00:06:30.589  
00:06:30.589                                                                                                  Latency(us)
00:06:30.589  
[2024-12-09T10:21:22.750Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:30.589  
[2024-12-09T10:21:22.750Z]  ===================================================================================================================
00:06:30.589  
[2024-12-09T10:21:22.750Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@978 -- # wait 49610
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # trap - SIGINT SIGTERM EXIT
00:06:30.589  
00:06:30.589  real	0m3.150s
00:06:30.589  user	0m5.997s
00:06:30.589  sys	0m0.421s
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:30.589   10:21:22 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:30.589  ************************************
00:06:30.589  END TEST bdev_qd_sampling
00:06:30.589  ************************************
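Editor's note: the qd_sampling pass above boils down to two checks — the `jq -r '.bdevs[0].queue_depth_polling_period'` extraction against the value set via `bdev_set_qd_sampling_period`, and the latency figures bdevperf derives from the raw tick counters. A minimal sketch re-deriving both from the `bdev_get_iostat` payload captured in this log (the JSON fields are copied from the log; only the checks are new):

```python
import json

# Trimmed copy of the bdev_get_iostat output captured above for Malloc_QD.
iostats = json.loads("""
{
  "tick_rate": 2599999259,
  "bdevs": [
    {
      "name": "Malloc_QD",
      "num_read_ops": 1545475,
      "read_latency_ticks": 2666256595786,
      "queue_depth_polling_period": 10
    }
  ]
}
""")

bdev = iostats["bdevs"][0]

# Equivalent of: jq -r '.bdevs[0].queue_depth_polling_period'
qd_sampling_period = bdev["queue_depth_polling_period"]
assert qd_sampling_period == 10  # value the test set via bdev_set_qd_sampling_period

# Average read latency in microseconds: ticks-per-op divided by the TSC rate.
avg_read_latency_us = (bdev["read_latency_ticks"] / bdev["num_read_ops"]
                       / iostats["tick_rate"] * 1e6)
# Lands near the ~663 us average in the bdevperf Latency table above.
print(f"{avg_read_latency_us:.1f} us")
```

This reproduces why the test at blockdev.sh line 528 compares the extracted period against 10, and shows that the per-bdev latency columns in the bdevperf table are just `read_latency_ticks / num_read_ops / tick_rate`.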
00:06:30.589   10:21:22 blockdev_general -- bdev/blockdev.sh@827 -- # run_test bdev_error error_test_suite ''
00:06:30.589   10:21:22 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:30.589   10:21:22 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:30.589   10:21:22 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:30.589  ************************************
00:06:30.589  START TEST bdev_error
00:06:30.589  ************************************
00:06:30.589   10:21:22 blockdev_general.bdev_error -- common/autotest_common.sh@1129 -- # error_test_suite ''
00:06:30.589   10:21:22 blockdev_general.bdev_error -- bdev/blockdev.sh@465 -- # DEV_1=Dev_1
00:06:30.589   10:21:22 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_2=Dev_2
00:06:30.589   10:21:22 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # ERR_DEV=EE_Dev_1
00:06:30.589   10:21:22 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # ERR_PID=49653
00:06:30.589  Process error testing pid: 49653
00:06:30.589   10:21:22 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # echo 'Process error testing pid: 49653'
00:06:30.589   10:21:22 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # waitforlisten 49653
00:06:30.589   10:21:22 blockdev_general.bdev_error -- bdev/blockdev.sh@470 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f ''
00:06:30.589   10:21:22 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 49653 ']'
00:06:30.589   10:21:22 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:30.589   10:21:22 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:30.589  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:30.589   10:21:22 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:30.589   10:21:22 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:30.589   10:21:22 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:30.589  [2024-12-09 10:21:22.627433] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:30.589  [2024-12-09 10:21:22.627581] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:30.847  EAL: TSC is not safe to use in SMP mode
00:06:30.847  EAL: TSC is not invariant
00:06:30.847  [2024-12-09 10:21:22.927019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:30.847  [2024-12-09 10:21:22.951236] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:30.847  [2024-12-09 10:21:22.951275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:06:31.412   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@475 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.412  Dev_1
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # waitforbdev Dev_1
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.412  [
00:06:31.412  {
00:06:31.412  "name": "Dev_1",
00:06:31.412  "aliases": [
00:06:31.412  "572f9816-b617-11ef-9b05-d5e34e08fe3b"
00:06:31.412  ],
00:06:31.412  "product_name": "Malloc disk",
00:06:31.412  "block_size": 512,
00:06:31.412  "num_blocks": 262144,
00:06:31.412  "uuid": "572f9816-b617-11ef-9b05-d5e34e08fe3b",
00:06:31.412  "assigned_rate_limits": {
00:06:31.412  "rw_ios_per_sec": 0,
00:06:31.412  "rw_mbytes_per_sec": 0,
00:06:31.412  "r_mbytes_per_sec": 0,
00:06:31.412  "w_mbytes_per_sec": 0
00:06:31.412  },
00:06:31.412  "claimed": false,
00:06:31.412  "zoned": false,
00:06:31.412  "supported_io_types": {
00:06:31.412  "read": true,
00:06:31.412  "write": true,
00:06:31.412  "unmap": true,
00:06:31.412  "flush": true,
00:06:31.412  "reset": true,
00:06:31.412  "nvme_admin": false,
00:06:31.412  "nvme_io": false,
00:06:31.412  "nvme_io_md": false,
00:06:31.412  "write_zeroes": true,
00:06:31.412  "zcopy": true,
00:06:31.412  "get_zone_info": false,
00:06:31.412  "zone_management": false,
00:06:31.412  "zone_append": false,
00:06:31.412  "compare": false,
00:06:31.412  "compare_and_write": false,
00:06:31.412  "abort": true,
00:06:31.412  "seek_hole": false,
00:06:31.412  "seek_data": false,
00:06:31.412  "copy": true,
00:06:31.412  "nvme_iov_md": false
00:06:31.412  },
00:06:31.412  "memory_domains": [
00:06:31.412  {
00:06:31.412  "dma_device_id": "system",
00:06:31.412  "dma_device_type": 1
00:06:31.412  },
00:06:31.412  {
00:06:31.412  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:31.412  "dma_device_type": 2
00:06:31.412  }
00:06:31.412  ],
00:06:31.412  "driver_specific": {}
00:06:31.412  }
00:06:31.412  ]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:06:31.412   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_error_create Dev_1
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.412  true
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.412  Dev_2
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # waitforbdev Dev_2
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.412  [
00:06:31.412  {
00:06:31.412  "name": "Dev_2",
00:06:31.412  "aliases": [
00:06:31.412  "57351623-b617-11ef-9b05-d5e34e08fe3b"
00:06:31.412  ],
00:06:31.412  "product_name": "Malloc disk",
00:06:31.412  "block_size": 512,
00:06:31.412  "num_blocks": 262144,
00:06:31.412  "uuid": "57351623-b617-11ef-9b05-d5e34e08fe3b",
00:06:31.412  "assigned_rate_limits": {
00:06:31.412  "rw_ios_per_sec": 0,
00:06:31.412  "rw_mbytes_per_sec": 0,
00:06:31.412  "r_mbytes_per_sec": 0,
00:06:31.412  "w_mbytes_per_sec": 0
00:06:31.412  },
00:06:31.412  "claimed": false,
00:06:31.412  "zoned": false,
00:06:31.412  "supported_io_types": {
00:06:31.412  "read": true,
00:06:31.412  "write": true,
00:06:31.412  "unmap": true,
00:06:31.412  "flush": true,
00:06:31.412  "reset": true,
00:06:31.412  "nvme_admin": false,
00:06:31.412  "nvme_io": false,
00:06:31.412  "nvme_io_md": false,
00:06:31.412  "write_zeroes": true,
00:06:31.412  "zcopy": true,
00:06:31.412  "get_zone_info": false,
00:06:31.412  "zone_management": false,
00:06:31.412  "zone_append": false,
00:06:31.412  "compare": false,
00:06:31.412  "compare_and_write": false,
00:06:31.412  "abort": true,
00:06:31.412  "seek_hole": false,
00:06:31.412  "seek_data": false,
00:06:31.412  "copy": true,
00:06:31.412  "nvme_iov_md": false
00:06:31.412  },
00:06:31.412  "memory_domains": [
00:06:31.412  {
00:06:31.412  "dma_device_id": "system",
00:06:31.412  "dma_device_type": 1
00:06:31.412  },
00:06:31.412  {
00:06:31.412  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:31.412  "dma_device_type": 2
00:06:31.412  }
00:06:31.412  ],
00:06:31.412  "driver_specific": {}
00:06:31.412  }
00:06:31.412  ]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:06:31.412   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:06:31.412   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:31.413   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:31.413   10:21:23 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:31.670   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # sleep 1
00:06:31.670   10:21:23 blockdev_general.bdev_error -- bdev/blockdev.sh@482 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:06:31.670  Running I/O for 5 seconds...
00:06:32.601   10:21:24 blockdev_general.bdev_error -- bdev/blockdev.sh@486 -- # kill -0 49653
00:06:32.601  Process is existed as continue on error is set. Pid: 49653
00:06:32.601   10:21:24 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # echo 'Process is existed as continue on error is set. Pid: 49653'
00:06:32.601   10:21:24 blockdev_general.bdev_error -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_error_delete EE_Dev_1
00:06:32.601   10:21:24 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.601   10:21:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:32.601   10:21:24 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.601   10:21:24 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_malloc_delete Dev_1
00:06:32.601   10:21:24 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:32.601   10:21:24 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:32.601     868811.00 IOPS,  3393.79 MiB/s
[2024-12-09T10:21:24.762Z]  10:21:24 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:32.601   10:21:24 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # sleep 5
00:06:32.601  Timeout while waiting for response:
00:06:32.601  
00:06:32.601  
00:06:34.913     978965.50 IOPS,  3824.08 MiB/s
[2024-12-09T10:21:28.008Z]   1016867.67 IOPS,  3972.14 MiB/s
[2024-12-09T10:21:28.942Z]   1018886.75 IOPS,  3980.03 MiB/s
00:06:36.781                                                                                                  Latency(us)
00:06:36.781  
[2024-12-09T10:21:28.942Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:36.781  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:06:36.781  	 EE_Dev_1            :       0.98  437279.73    1708.12       5.08     0.00      36.42      17.62      90.58
00:06:36.781  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:06:36.781  	 Dev_2               :       5.00  948686.00    3705.80       0.00     0.00      16.69       4.85    9779.99
00:06:36.781  
[2024-12-09T10:21:28.943Z]  ===================================================================================================================
00:06:36.782  
[2024-12-09T10:21:28.943Z]  Total                       :            1385965.73    5413.93       5.08     0.00      18.33       4.85    9779.99
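Editor's note: the `Fail/s` column for EE_Dev_1 in the table above is consistent with the `-n 5` argument passed to `bdev_error_inject_error` — the failure count is reported as a rate, so multiplying by the job runtime recovers the injected count. A quick check using the two values from the table (copied from the log):

```python
# EE_Dev_1 row of the bdevperf table above: runtime(s) and Fail/s columns.
ee_dev_1_runtime_s = 0.98
ee_dev_1_fail_per_s = 5.08

# bdev_error_inject_error EE_Dev_1 all failure -n 5 -> expect 5 failed I/Os
# before the error bdev passes I/O through to the underlying Dev_2.
injected_failures = round(ee_dev_1_runtime_s * ee_dev_1_fail_per_s)
print(injected_failures)  # -> 5
```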
00:06:37.715   10:21:29 blockdev_general.bdev_error -- bdev/blockdev.sh@498 -- # killprocess 49653
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@954 -- # '[' -z 49653 ']'
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # kill -0 49653
00:06:37.715    10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # uname
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:06:37.715    10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@962 -- # tail -1
00:06:37.715    10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@962 -- # ps -c -o command 49653
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@962 -- # process_name=bdevperf
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@964 -- # '[' bdevperf = sudo ']'
00:06:37.715  killing process with pid 49653
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49653'
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@973 -- # kill 49653
00:06:37.715  Received shutdown signal, test time was about 5.000000 seconds
00:06:37.715  
00:06:37.715                                                                                                  Latency(us)
00:06:37.715  
[2024-12-09T10:21:29.876Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:37.715  
[2024-12-09T10:21:29.876Z]  ===================================================================================================================
00:06:37.715  
[2024-12-09T10:21:29.876Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@978 -- # wait 49653
00:06:37.715   10:21:29 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # ERR_PID=49693
00:06:37.715   10:21:29 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # echo 'Process error testing pid: 49693'
00:06:37.715  Process error testing pid: 49693
00:06:37.715   10:21:29 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # waitforlisten 49693
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@835 -- # '[' -z 49693 ']'
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:37.715   10:21:29 blockdev_general.bdev_error -- bdev/blockdev.sh@501 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 ''
00:06:37.715  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:37.715   10:21:29 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:37.715  [2024-12-09 10:21:29.862798] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:37.715  [2024-12-09 10:21:29.862964] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:38.281  EAL: TSC is not safe to use in SMP mode
00:06:38.281  EAL: TSC is not invariant
00:06:38.281  [2024-12-09 10:21:30.177466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.281  [2024-12-09 10:21:30.200948] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:38.281  [2024-12-09 10:21:30.200987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@868 -- # return 0
00:06:38.847   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@506 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.847  Dev_1
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.847   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # waitforbdev Dev_1
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_1
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.847   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.847  [
00:06:38.847  {
00:06:38.847  "name": "Dev_1",
00:06:38.847  "aliases": [
00:06:38.847  "5b861f34-b617-11ef-9b05-d5e34e08fe3b"
00:06:38.847  ],
00:06:38.847  "product_name": "Malloc disk",
00:06:38.847  "block_size": 512,
00:06:38.847  "num_blocks": 262144,
00:06:38.847  "uuid": "5b861f34-b617-11ef-9b05-d5e34e08fe3b",
00:06:38.847  "assigned_rate_limits": {
00:06:38.847  "rw_ios_per_sec": 0,
00:06:38.847  "rw_mbytes_per_sec": 0,
00:06:38.847  "r_mbytes_per_sec": 0,
00:06:38.847  "w_mbytes_per_sec": 0
00:06:38.847  },
00:06:38.847  "claimed": false,
00:06:38.847  "zoned": false,
00:06:38.847  "supported_io_types": {
00:06:38.847  "read": true,
00:06:38.847  "write": true,
00:06:38.847  "unmap": true,
00:06:38.847  "flush": true,
00:06:38.847  "reset": true,
00:06:38.847  "nvme_admin": false,
00:06:38.847  "nvme_io": false,
00:06:38.847  "nvme_io_md": false,
00:06:38.847  "write_zeroes": true,
00:06:38.847  "zcopy": true,
00:06:38.847  "get_zone_info": false,
00:06:38.847  "zone_management": false,
00:06:38.847  "zone_append": false,
00:06:38.847  "compare": false,
00:06:38.847  "compare_and_write": false,
00:06:38.847  "abort": true,
00:06:38.847  "seek_hole": false,
00:06:38.848  "seek_data": false,
00:06:38.848  "copy": true,
00:06:38.848  "nvme_iov_md": false
00:06:38.848  },
00:06:38.848  "memory_domains": [
00:06:38.848  {
00:06:38.848  "dma_device_id": "system",
00:06:38.848  "dma_device_type": 1
00:06:38.848  },
00:06:38.848  {
00:06:38.848  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:38.848  "dma_device_type": 2
00:06:38.848  }
00:06:38.848  ],
00:06:38.848  "driver_specific": {}
00:06:38.848  }
00:06:38.848  ]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:06:38.848   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_error_create Dev_1
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.848  true
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.848  Dev_2
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # waitforbdev Dev_2
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@903 -- # local bdev_name=Dev_2
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # local i
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.848  [
00:06:38.848  {
00:06:38.848  "name": "Dev_2",
00:06:38.848  "aliases": [
00:06:38.848  "5b8b9d52-b617-11ef-9b05-d5e34e08fe3b"
00:06:38.848  ],
00:06:38.848  "product_name": "Malloc disk",
00:06:38.848  "block_size": 512,
00:06:38.848  "num_blocks": 262144,
00:06:38.848  "uuid": "5b8b9d52-b617-11ef-9b05-d5e34e08fe3b",
00:06:38.848  "assigned_rate_limits": {
00:06:38.848  "rw_ios_per_sec": 0,
00:06:38.848  "rw_mbytes_per_sec": 0,
00:06:38.848  "r_mbytes_per_sec": 0,
00:06:38.848  "w_mbytes_per_sec": 0
00:06:38.848  },
00:06:38.848  "claimed": false,
00:06:38.848  "zoned": false,
00:06:38.848  "supported_io_types": {
00:06:38.848  "read": true,
00:06:38.848  "write": true,
00:06:38.848  "unmap": true,
00:06:38.848  "flush": true,
00:06:38.848  "reset": true,
00:06:38.848  "nvme_admin": false,
00:06:38.848  "nvme_io": false,
00:06:38.848  "nvme_io_md": false,
00:06:38.848  "write_zeroes": true,
00:06:38.848  "zcopy": true,
00:06:38.848  "get_zone_info": false,
00:06:38.848  "zone_management": false,
00:06:38.848  "zone_append": false,
00:06:38.848  "compare": false,
00:06:38.848  "compare_and_write": false,
00:06:38.848  "abort": true,
00:06:38.848  "seek_hole": false,
00:06:38.848  "seek_data": false,
00:06:38.848  "copy": true,
00:06:38.848  "nvme_iov_md": false
00:06:38.848  },
00:06:38.848  "memory_domains": [
00:06:38.848  {
00:06:38.848  "dma_device_id": "system",
00:06:38.848  "dma_device_type": 1
00:06:38.848  },
00:06:38.848  {
00:06:38.848  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:38.848  "dma_device_type": 2
00:06:38.848  }
00:06:38.848  ],
00:06:38.848  "driver_specific": {}
00:06:38.848  }
00:06:38.848  ]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@911 -- # return 0
00:06:38.848   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:38.848   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # NOT wait 49693
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@652 -- # local es=0
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@654 -- # valid_exec_arg wait 49693
00:06:38.848   10:21:30 blockdev_general.bdev_error -- bdev/blockdev.sh@513 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # local arg=wait
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:38.848    10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # type -t wait
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:38.848   10:21:30 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # wait 49693
00:06:38.848  Running I/O for 5 seconds...
00:06:38.848  task offset: 39056 on job bdev=EE_Dev_1 fails
00:06:38.848  
00:06:38.848                                                                                                  Latency(us)
00:06:38.848  
[2024-12-09T10:21:31.009Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:38.848  Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:06:38.848  Job: EE_Dev_1 ended in about 0.00 seconds with error
00:06:38.848  	 EE_Dev_1            :       0.00  252873.56     987.79   57471.26     0.00      41.57      17.23      79.16
00:06:38.848  Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096)
00:06:38.848  	 Dev_2               :       0.00  293577.98    1146.79       0.00     0.00      26.40      16.25      40.96
00:06:38.848  
[2024-12-09T10:21:31.009Z]  ===================================================================================================================
00:06:38.848  
[2024-12-09T10:21:31.009Z]  Total                       :             546451.54    2134.58   57471.26     0.00      33.34      16.25      79.16
00:06:38.848  [2024-12-09 10:21:30.928640] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:38.848  request:
00:06:38.848  {
00:06:38.848    "method": "perform_tests",
00:06:38.848    "req_id": 1
00:06:38.848  }
00:06:38.848  Got JSON-RPC error response
00:06:38.848  response:
00:06:38.848  {
00:06:38.848    "code": -32603,
00:06:38.848    "message": "bdevperf failed with error Operation not permitted"
00:06:38.848  }
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@655 -- # es=255
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@664 -- # es=127
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@665 -- # case "$es" in
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@672 -- # es=1
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:39.106  
00:06:39.106  real	0m8.396s
00:06:39.106  user	0m8.633s
00:06:39.106  sys	0m0.725s
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:39.106   10:21:31 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x
00:06:39.106  ************************************
00:06:39.106  END TEST bdev_error
00:06:39.106  ************************************
00:06:39.106   10:21:31 blockdev_general -- bdev/blockdev.sh@828 -- # run_test bdev_stat stat_test_suite ''
00:06:39.106   10:21:31 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:39.106   10:21:31 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:39.106   10:21:31 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:39.106  ************************************
00:06:39.106  START TEST bdev_stat
00:06:39.106  ************************************
00:06:39.106   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@1129 -- # stat_test_suite ''
00:06:39.106   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@591 -- # STAT_DEV=Malloc_STAT
00:06:39.106   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # STAT_PID=49720
00:06:39.106  Process Bdev IO statistics testing pid: 49720
00:06:39.106   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # echo 'Process Bdev IO statistics testing pid: 49720'
00:06:39.106   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT
00:06:39.106   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # waitforlisten 49720
00:06:39.107   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@835 -- # '[' -z 49720 ']'
00:06:39.107   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@594 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C ''
00:06:39.107   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.107   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:39.107  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.107   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.107   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:39.107   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:39.107  [2024-12-09 10:21:31.060047] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:39.107  [2024-12-09 10:21:31.060158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:39.364  EAL: TSC is not safe to use in SMP mode
00:06:39.364  EAL: TSC is not invariant
00:06:39.364  [2024-12-09 10:21:31.366727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:39.364  [2024-12-09 10:21:31.395626] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:39.364  [2024-12-09 10:21:31.395667] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:39.364  [2024-12-09 10:21:31.396017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:39.364  [2024-12-09 10:21:31.395890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@868 -- # return 0
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@600 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:39.929  Malloc_STAT
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # waitforbdev Malloc_STAT
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_STAT
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # local i
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:39.929   10:21:31 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:39.929  [
00:06:39.929  {
00:06:39.929  "name": "Malloc_STAT",
00:06:39.929  "aliases": [
00:06:39.929  "5c3c1e24-b617-11ef-9b05-d5e34e08fe3b"
00:06:39.929  ],
00:06:39.929  "product_name": "Malloc disk",
00:06:39.929  "block_size": 512,
00:06:39.929  "num_blocks": 262144,
00:06:39.929  "uuid": "5c3c1e24-b617-11ef-9b05-d5e34e08fe3b",
00:06:39.929  "assigned_rate_limits": {
00:06:39.929  "rw_ios_per_sec": 0,
00:06:39.929  "rw_mbytes_per_sec": 0,
00:06:39.929  "r_mbytes_per_sec": 0,
00:06:39.929  "w_mbytes_per_sec": 0
00:06:39.929  },
00:06:39.929  "claimed": false,
00:06:39.929  "zoned": false,
00:06:39.929  "supported_io_types": {
00:06:39.929  "read": true,
00:06:39.929  "write": true,
00:06:39.929  "unmap": true,
00:06:39.929  "flush": true,
00:06:39.929  "reset": true,
00:06:39.929  "nvme_admin": false,
00:06:39.929  "nvme_io": false,
00:06:39.929  "nvme_io_md": false,
00:06:39.929  "write_zeroes": true,
00:06:39.929  "zcopy": true,
00:06:39.929  "get_zone_info": false,
00:06:39.929  "zone_management": false,
00:06:39.929  "zone_append": false,
00:06:39.929  "compare": false,
00:06:39.929  "compare_and_write": false,
00:06:39.929  "abort": true,
00:06:39.929  "seek_hole": false,
00:06:39.929  "seek_data": false,
00:06:39.929  "copy": true,
00:06:39.929  "nvme_iov_md": false
00:06:39.929  },
00:06:39.929  "memory_domains": [
00:06:39.929  {
00:06:39.929  "dma_device_id": "system",
00:06:39.929  "dma_device_type": 1
00:06:39.929  },
00:06:39.929  {
00:06:39.929  "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:39.929  "dma_device_type": 2
00:06:39.929  }
00:06:39.929  ],
00:06:39.929  "driver_specific": {}
00:06:39.929  }
00:06:39.929  ]
00:06:39.929   10:21:32 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:39.929   10:21:32 blockdev_general.bdev_stat -- common/autotest_common.sh@911 -- # return 0
00:06:39.929   10:21:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # sleep 2
00:06:39.929   10:21:32 blockdev_general.bdev_stat -- bdev/blockdev.sh@603 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:06:39.929  Running I/O for 10 seconds...
00:06:42.238     711936.00 IOPS,  2781.00 MiB/s
[2024-12-09T10:21:34.399Z]    733696.00 IOPS,  2866.00 MiB/s
[2024-12-09T10:21:34.399Z]  10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # stat_function_test Malloc_STAT
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@558 -- # local bdev_name=Malloc_STAT
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local iostats
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local io_count1
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count2
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local iostats_per_channel
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local io_count_per_channel1
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel2
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel_all=0
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@567 -- # iostats='{
00:06:42.238  "tick_rate": 2599999259,
00:06:42.238  "ticks": 859523173636,
00:06:42.238  "bdevs": [
00:06:42.238  {
00:06:42.238  "name": "Malloc_STAT",
00:06:42.238  "bytes_read": 6203412992,
00:06:42.238  "num_read_ops": 1514499,
00:06:42.238  "bytes_written": 0,
00:06:42.238  "num_write_ops": 0,
00:06:42.238  "bytes_unmapped": 0,
00:06:42.238  "num_unmap_ops": 0,
00:06:42.238  "bytes_copied": 0,
00:06:42.238  "num_copy_ops": 0,
00:06:42.238  "read_latency_ticks": 2741426445110,
00:06:42.238  "max_read_latency_ticks": 2616176,
00:06:42.238  "min_read_latency_ticks": 49310,
00:06:42.238  "write_latency_ticks": 0,
00:06:42.238  "max_write_latency_ticks": 0,
00:06:42.238  "min_write_latency_ticks": 0,
00:06:42.238  "unmap_latency_ticks": 0,
00:06:42.238  "max_unmap_latency_ticks": 0,
00:06:42.238  "min_unmap_latency_ticks": 0,
00:06:42.238  "copy_latency_ticks": 0,
00:06:42.238  "max_copy_latency_ticks": 0,
00:06:42.238  "min_copy_latency_ticks": 0,
00:06:42.238  "io_error": {}
00:06:42.238  }
00:06:42.238  ]
00:06:42.238  }'
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # jq -r '.bdevs[0].num_read_ops'
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # io_count1=1514499
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:42.238    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.238   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@570 -- # iostats_per_channel='{
00:06:42.238  "tick_rate": 2599999259,
00:06:42.238  "ticks": 859577725700,
00:06:42.238  "name": "Malloc_STAT",
00:06:42.238  "channels": [
00:06:42.238  {
00:06:42.238  "thread_id": 2,
00:06:42.238  "bytes_read": 3128950784,
00:06:42.238  "num_read_ops": 763904,
00:06:42.238  "bytes_written": 0,
00:06:42.238  "num_write_ops": 0,
00:06:42.238  "bytes_unmapped": 0,
00:06:42.238  "num_unmap_ops": 0,
00:06:42.238  "bytes_copied": 0,
00:06:42.239  "num_copy_ops": 0,
00:06:42.239  "read_latency_ticks": 1384473776732,
00:06:42.239  "max_read_latency_ticks": 2616176,
00:06:42.239  "min_read_latency_ticks": 1142366,
00:06:42.239  "write_latency_ticks": 0,
00:06:42.239  "max_write_latency_ticks": 0,
00:06:42.239  "min_write_latency_ticks": 0,
00:06:42.239  "unmap_latency_ticks": 0,
00:06:42.239  "max_unmap_latency_ticks": 0,
00:06:42.239  "min_unmap_latency_ticks": 0,
00:06:42.239  "copy_latency_ticks": 0,
00:06:42.239  "max_copy_latency_ticks": 0,
00:06:42.239  "min_copy_latency_ticks": 0
00:06:42.239  },
00:06:42.239  {
00:06:42.239  "thread_id": 3,
00:06:42.239  "bytes_read": 3139436544,
00:06:42.239  "num_read_ops": 766464,
00:06:42.239  "bytes_written": 0,
00:06:42.239  "num_write_ops": 0,
00:06:42.239  "bytes_unmapped": 0,
00:06:42.239  "num_unmap_ops": 0,
00:06:42.239  "bytes_copied": 0,
00:06:42.239  "num_copy_ops": 0,
00:06:42.239  "read_latency_ticks": 1384818882240,
00:06:42.239  "max_read_latency_ticks": 2186120,
00:06:42.239  "min_read_latency_ticks": 1196414,
00:06:42.239  "write_latency_ticks": 0,
00:06:42.239  "max_write_latency_ticks": 0,
00:06:42.239  "min_write_latency_ticks": 0,
00:06:42.239  "unmap_latency_ticks": 0,
00:06:42.239  "max_unmap_latency_ticks": 0,
00:06:42.239  "min_unmap_latency_ticks": 0,
00:06:42.239  "copy_latency_ticks": 0,
00:06:42.239  "max_copy_latency_ticks": 0,
00:06:42.239  "min_copy_latency_ticks": 0
00:06:42.239  }
00:06:42.239  ]
00:06:42.239  }'
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # jq -r '.channels[0].num_read_ops'
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # io_count_per_channel1=763904
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel_all=763904
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # jq -r '.channels[1].num_read_ops'
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel2=766464
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel_all=1530368
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@576 -- # iostats='{
00:06:42.239  "tick_rate": 2599999259,
00:06:42.239  "ticks": 859651841276,
00:06:42.239  "bdevs": [
00:06:42.239  {
00:06:42.239  "name": "Malloc_STAT",
00:06:42.239  "bytes_read": 6356505088,
00:06:42.239  "num_read_ops": 1551875,
00:06:42.239  "bytes_written": 0,
00:06:42.239  "num_write_ops": 0,
00:06:42.239  "bytes_unmapped": 0,
00:06:42.239  "num_unmap_ops": 0,
00:06:42.239  "bytes_copied": 0,
00:06:42.239  "num_copy_ops": 0,
00:06:42.239  "read_latency_ticks": 2807195970418,
00:06:42.239  "max_read_latency_ticks": 2616176,
00:06:42.239  "min_read_latency_ticks": 49310,
00:06:42.239  "write_latency_ticks": 0,
00:06:42.239  "max_write_latency_ticks": 0,
00:06:42.239  "min_write_latency_ticks": 0,
00:06:42.239  "unmap_latency_ticks": 0,
00:06:42.239  "max_unmap_latency_ticks": 0,
00:06:42.239  "min_unmap_latency_ticks": 0,
00:06:42.239  "copy_latency_ticks": 0,
00:06:42.239  "max_copy_latency_ticks": 0,
00:06:42.239  "min_copy_latency_ticks": 0,
00:06:42.239  "io_error": {}
00:06:42.239  }
00:06:42.239  ]
00:06:42.239  }'
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # jq -r '.bdevs[0].num_read_ops'
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # io_count2=1551875
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 1530368 -lt 1514499 ']'
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@582 -- # '[' 1530368 -gt 1551875 ']'
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@607 -- # rpc_cmd bdev_malloc_delete Malloc_STAT
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:42.239  
00:06:42.239                                                                                                  Latency(us)
00:06:42.239  
[2024-12-09T10:21:34.400Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:42.239  Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:06:42.239  	 Malloc_STAT         :       2.13  367011.04    1433.64       0.00     0.00     696.54      97.67    1008.25
00:06:42.239  Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:42.239  	 Malloc_STAT         :       2.13  368178.60    1438.20       0.00     0.00     694.31      61.83     844.41
00:06:42.239  
[2024-12-09T10:21:34.400Z]  ===================================================================================================================
00:06:42.239  
[2024-12-09T10:21:34.400Z]  Total                       :             735189.64    2871.83       0.00     0.00     695.42      61.83    1008.25
00:06:42.239  {
00:06:42.239    "results": [
00:06:42.239      {
00:06:42.239        "job": "Malloc_STAT",
00:06:42.239        "core_mask": "0x1",
00:06:42.239        "workload": "randread",
00:06:42.239        "status": "finished",
00:06:42.239        "queue_depth": 256,
00:06:42.239        "io_size": 4096,
00:06:42.239        "runtime": 2.129549,
00:06:42.239        "iops": 367011.0431833219,
00:06:42.239        "mibps": 1433.6368874348511,
00:06:42.239        "io_failed": 0,
00:06:42.239        "io_timeout": 0,
00:06:42.239        "avg_latency_us": 696.5369500058878,
00:06:42.239        "min_latency_us": 97.67387399090025,
00:06:42.239        "max_latency_us": 1008.2464411963896
00:06:42.239      },
00:06:42.239      {
00:06:42.239        "job": "Malloc_STAT",
00:06:42.239        "core_mask": "0x2",
00:06:42.239        "workload": "randread",
00:06:42.239        "status": "finished",
00:06:42.239        "queue_depth": 256,
00:06:42.239        "io_size": 4096,
00:06:42.239        "runtime": 2.129749,
00:06:42.239        "iops": 368178.59757182654,
00:06:42.239        "mibps": 1438.1976467649474,
00:06:42.239        "io_failed": 0,
00:06:42.239        "io_timeout": 0,
00:06:42.239        "avg_latency_us": 694.3128345601393,
00:06:42.239        "min_latency_us": 61.83386377649733,
00:06:42.239        "max_latency_us": 844.4063945019763
00:06:42.239      }
00:06:42.239    ],
00:06:42.239    "core_count": 2
00:06:42.239  }
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # killprocess 49720
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@954 -- # '[' -z 49720 ']'
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # kill -0 49720
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # uname
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@962 -- # ps -c -o command 49720
00:06:42.239    10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@962 -- # tail -1
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@962 -- # process_name=bdevperf
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@964 -- # '[' bdevperf = sudo ']'
00:06:42.239  killing process with pid 49720
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49720'
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@973 -- # kill 49720
00:06:42.239  Received shutdown signal, test time was about 2.148012 seconds
00:06:42.239  
00:06:42.239                                                                                                  Latency(us)
00:06:42.239  
[2024-12-09T10:21:34.400Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:42.239  
[2024-12-09T10:21:34.400Z]  ===================================================================================================================
00:06:42.239  
[2024-12-09T10:21:34.400Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@978 -- # wait 49720
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # trap - SIGINT SIGTERM EXIT
00:06:42.239  
00:06:42.239  real	0m3.252s
00:06:42.239  user	0m6.257s
00:06:42.239  sys	0m0.394s
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:42.239   10:21:34 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x
00:06:42.239  ************************************
00:06:42.239  END TEST bdev_stat
00:06:42.239  ************************************
00:06:42.239   10:21:34 blockdev_general -- bdev/blockdev.sh@829 -- # run_test bdev_dif_insert_strip dif_insert_strip_test_suite ''
00:06:42.239   10:21:34 blockdev_general -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:42.239   10:21:34 blockdev_general -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:42.239   10:21:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:42.239  ************************************
00:06:42.239  START TEST bdev_dif_insert_strip
00:06:42.239  ************************************
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@1129 -- # dif_insert_strip_test_suite ''
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@615 -- # DIF_DEV_1=Malloc_DIF_1
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@616 -- # DIF_DEV_2=Malloc_DIF_2
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@617 -- # DIF_DEV_3=Malloc_DIF_3
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@620 -- # DIF_PID=49771
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@621 -- # echo 'Process bdev DIF insert/strip testing pid: 49771'
00:06:42.239  Process bdev DIF insert/strip testing pid: 49771
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@622 -- # trap 'cleanup; killprocess $DIF_PID; exit 1' SIGINT SIGTERM EXIT
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@623 -- # waitforlisten 49771
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@835 -- # '[' -z 49771 ']'
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:42.239   10:21:34 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:42.240  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:42.240   10:21:34 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@619 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0xf -q 32 -o 4096 -w randrw -M 50 -t 5 -C -N ''
00:06:42.240   10:21:34 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:42.240   10:21:34 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:42.240   10:21:34 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:42.240  [2024-12-09 10:21:34.348473] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:42.240  [2024-12-09 10:21:34.348611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:42.498  EAL: TSC is not safe to use in SMP mode
00:06:42.498  EAL: TSC is not invariant
00:06:42.498  [2024-12-09 10:21:34.651417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:42.756  [2024-12-09 10:21:34.678283] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:42.756  [2024-12-09 10:21:34.678313] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:42.756  [2024-12-09 10:21:34.678319] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2].
00:06:42.756  [2024-12-09 10:21:34.678324] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3].
00:06:42.756  [2024-12-09 10:21:34.678536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:06:42.756  [2024-12-09 10:21:34.679204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.756  [2024-12-09 10:21:34.679116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:06:42.756  [2024-12-09 10:21:34.679201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
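The four "Reactor started" notices above follow from the `-m 0xf` core mask passed to bdevperf: one reactor per set bit. The relationship can be sketched in plain POSIX shell (illustrative only, not part of the test suite):

```shell
# Count the cores enabled by an SPDK core mask, e.g. -m 0xf -> 4 reactors.
mask=0xf
count=0
v=$((mask))
while [ "$v" -gt 0 ]; do
  count=$((count + (v & 1)))   # add the lowest bit
  v=$((v >> 1))                # shift to the next core
done
echo "$count"
```

With `mask=0xf` this prints 4, matching the reactors started on cores 0 through 3.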
00:06:43.320   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@868 -- # return 0
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_malloc_create -b Malloc_DIF_1 1 512 -m 8 -t 1 -f 0 -i
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321  Malloc_DIF_1
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@626 -- # waitforbdev Malloc_DIF_1
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_DIF_1
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@905 -- # local i
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_DIF_1 -t 2000
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321  [
00:06:43.321  {
00:06:43.321  "name": "Malloc_DIF_1",
00:06:43.321  "aliases": [
00:06:43.321  "5e286efb-b617-11ef-9b05-d5e34e08fe3b"
00:06:43.321  ],
00:06:43.321  "product_name": "Malloc disk",
00:06:43.321  "block_size": 520,
00:06:43.321  "num_blocks": 2048,
00:06:43.321  "uuid": "5e286efb-b617-11ef-9b05-d5e34e08fe3b",
00:06:43.321  "md_size": 8,
00:06:43.321  "md_interleave": true,
00:06:43.321  "dif_type": 1,
00:06:43.321  "dif_is_head_of_md": false,
00:06:43.321  "enabled_dif_check_types": {
00:06:43.321  "reftag": true,
00:06:43.321  "apptag": false,
00:06:43.321  "guard": true
00:06:43.321  },
00:06:43.321  "dif_pi_format": 0,
00:06:43.321  "assigned_rate_limits": {
00:06:43.321  "rw_ios_per_sec": 0,
00:06:43.321  "rw_mbytes_per_sec": 0,
00:06:43.321  "r_mbytes_per_sec": 0,
00:06:43.321  "w_mbytes_per_sec": 0
00:06:43.321  },
00:06:43.321  "claimed": false,
00:06:43.321  "zoned": false,
00:06:43.321  "supported_io_types": {
00:06:43.321  "read": true,
00:06:43.321  "write": true,
00:06:43.321  "unmap": true,
00:06:43.321  "flush": true,
00:06:43.321  "reset": true,
00:06:43.321  "nvme_admin": false,
00:06:43.321  "nvme_io": false,
00:06:43.321  "nvme_io_md": false,
00:06:43.321  "write_zeroes": true,
00:06:43.321  "zcopy": true,
00:06:43.321  "get_zone_info": false,
00:06:43.321  "zone_management": false,
00:06:43.321  "zone_append": false,
00:06:43.321  "compare": false,
00:06:43.321  "compare_and_write": false,
00:06:43.321  "abort": true,
00:06:43.321  "seek_hole": false,
00:06:43.321  "seek_data": false,
00:06:43.321  "copy": true,
00:06:43.321  "nvme_iov_md": false
00:06:43.321  },
00:06:43.321  "driver_specific": {}
00:06:43.321  }
00:06:43.321  ]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@911 -- # return 0
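Note the `"block_size": 520` in the dump above for a 512-byte Malloc bdev created with `-m 8 -i`: with interleaved DIF metadata (`md_interleave: true`), each logical block carries its metadata inline, so the exported block size is data size plus metadata size. A minimal arithmetic check, using the values from this dump:

```shell
# Interleaved DIF metadata extends the logical block: block_size = data + md_size.
data_size=512   # data portion requested at bdev_malloc_create
md_size=8       # metadata bytes per block (-m 8 for Malloc_DIF_1)
echo $((data_size + md_size))
```

This prints 520; the same arithmetic gives 528 for Malloc_DIF_2 and Malloc_DIF_3, which use `-m 16`.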
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@627 -- # rpc_cmd bdev_malloc_create -b Malloc_DIF_2 1 512 -m 16 -t 1 -f 0 -i
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321  Malloc_DIF_2
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@628 -- # waitforbdev Malloc_DIF_2
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_DIF_2
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@905 -- # local i
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_DIF_2 -t 2000
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321  [
00:06:43.321  {
00:06:43.321  "name": "Malloc_DIF_2",
00:06:43.321  "aliases": [
00:06:43.321  "5e2cb406-b617-11ef-9b05-d5e34e08fe3b"
00:06:43.321  ],
00:06:43.321  "product_name": "Malloc disk",
00:06:43.321  "block_size": 528,
00:06:43.321  "num_blocks": 2048,
00:06:43.321  "uuid": "5e2cb406-b617-11ef-9b05-d5e34e08fe3b",
00:06:43.321  "md_size": 16,
00:06:43.321  "md_interleave": true,
00:06:43.321  "dif_type": 1,
00:06:43.321  "dif_is_head_of_md": false,
00:06:43.321  "enabled_dif_check_types": {
00:06:43.321  "reftag": true,
00:06:43.321  "apptag": false,
00:06:43.321  "guard": true
00:06:43.321  },
00:06:43.321  "dif_pi_format": 0,
00:06:43.321  "assigned_rate_limits": {
00:06:43.321  "rw_ios_per_sec": 0,
00:06:43.321  "rw_mbytes_per_sec": 0,
00:06:43.321  "r_mbytes_per_sec": 0,
00:06:43.321  "w_mbytes_per_sec": 0
00:06:43.321  },
00:06:43.321  "claimed": false,
00:06:43.321  "zoned": false,
00:06:43.321  "supported_io_types": {
00:06:43.321  "read": true,
00:06:43.321  "write": true,
00:06:43.321  "unmap": true,
00:06:43.321  "flush": true,
00:06:43.321  "reset": true,
00:06:43.321  "nvme_admin": false,
00:06:43.321  "nvme_io": false,
00:06:43.321  "nvme_io_md": false,
00:06:43.321  "write_zeroes": true,
00:06:43.321  "zcopy": true,
00:06:43.321  "get_zone_info": false,
00:06:43.321  "zone_management": false,
00:06:43.321  "zone_append": false,
00:06:43.321  "compare": false,
00:06:43.321  "compare_and_write": false,
00:06:43.321  "abort": true,
00:06:43.321  "seek_hole": false,
00:06:43.321  "seek_data": false,
00:06:43.321  "copy": true,
00:06:43.321  "nvme_iov_md": false
00:06:43.321  },
00:06:43.321  "driver_specific": {}
00:06:43.321  }
00:06:43.321  ]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@911 -- # return 0
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@629 -- # rpc_cmd bdev_malloc_create -b Malloc_DIF_3 1 512 -m 16 -t 1 -f 0 -i -d
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321  Malloc_DIF_3
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@630 -- # waitforbdev Malloc_DIF_3
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@903 -- # local bdev_name=Malloc_DIF_3
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@905 -- # local i
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b Malloc_DIF_3 -t 2000
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:43.321   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:43.321  [
00:06:43.321  {
00:06:43.321  "name": "Malloc_DIF_3",
00:06:43.321  "aliases": [
00:06:43.321  "5e30f9a5-b617-11ef-9b05-d5e34e08fe3b"
00:06:43.321  ],
00:06:43.321  "product_name": "Malloc disk",
00:06:43.321  "block_size": 528,
00:06:43.321  "num_blocks": 2048,
00:06:43.321  "uuid": "5e30f9a5-b617-11ef-9b05-d5e34e08fe3b",
00:06:43.321  "md_size": 16,
00:06:43.321  "md_interleave": true,
00:06:43.321  "dif_type": 1,
00:06:43.321  "dif_is_head_of_md": true,
00:06:43.321  "enabled_dif_check_types": {
00:06:43.321  "reftag": true,
00:06:43.321  "apptag": false,
00:06:43.321  "guard": true
00:06:43.321  },
00:06:43.321  "dif_pi_format": 0,
00:06:43.321  "assigned_rate_limits": {
00:06:43.321  "rw_ios_per_sec": 0,
00:06:43.322  "rw_mbytes_per_sec": 0,
00:06:43.322  "r_mbytes_per_sec": 0,
00:06:43.322  "w_mbytes_per_sec": 0
00:06:43.322  },
00:06:43.322  "claimed": false,
00:06:43.322  "zoned": false,
00:06:43.322  "supported_io_types": {
00:06:43.322  "read": true,
00:06:43.322  "write": true,
00:06:43.322  "unmap": true,
00:06:43.322  "flush": true,
00:06:43.322  "reset": true,
00:06:43.322  "nvme_admin": false,
00:06:43.322  "nvme_io": false,
00:06:43.322  "nvme_io_md": false,
00:06:43.322  "write_zeroes": true,
00:06:43.322  "zcopy": true,
00:06:43.322  "get_zone_info": false,
00:06:43.322  "zone_management": false,
00:06:43.322  "zone_append": false,
00:06:43.322  "compare": false,
00:06:43.322  "compare_and_write": false,
00:06:43.322  "abort": true,
00:06:43.322  "seek_hole": false,
00:06:43.322  "seek_data": false,
00:06:43.322  "copy": true,
00:06:43.322  "nvme_iov_md": false
00:06:43.322  },
00:06:43.322  "driver_specific": {}
00:06:43.322  }
00:06:43.322  ]
00:06:43.322   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:43.322   10:21:35 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@911 -- # return 0
00:06:43.322   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@633 -- # sleep 10
00:06:43.322   10:21:35 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@632 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:06:43.322  Running I/O for 5 seconds...
00:06:45.624     226432.00 IOPS,   884.50 MiB/s
[2024-12-09T10:21:38.718Z]    176656.00 IOPS,   690.06 MiB/s
[2024-12-09T10:21:39.651Z]    168661.33 IOPS,   658.83 MiB/s
[2024-12-09T10:21:40.586Z]    183544.00 IOPS,   716.97 MiB/s
00:06:48.425                                                                                                  Latency(us)
00:06:48.425  
[2024-12-09T10:21:40.586Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:48.425  Job: Malloc_DIF_1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_1        :       5.00   14723.40      57.51       0.00     0.00    2170.82     360.76    5721.80
00:06:48.425  Job: Malloc_DIF_1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_1        :       5.00   16503.46      64.47       0.00     0.00    1936.66     362.34    3251.59
00:06:48.425  Job: Malloc_DIF_1 (Core Mask 0x4, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_1        :       5.00   16197.18      63.27       0.00     0.00    1973.25     400.15    3352.42
00:06:48.425  Job: Malloc_DIF_1 (Core Mask 0x8, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_1        :       5.00   16559.39      64.69       0.00     0.00    1930.14     412.75    3251.59
00:06:48.425  Job: Malloc_DIF_2 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_2        :       5.00   14723.01      57.51       0.00     0.00    2170.53     387.54    5671.39
00:06:48.425  Job: Malloc_DIF_2 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_2        :       5.00   16502.90      64.46       0.00     0.00    1936.35     393.85    3226.39
00:06:48.425  Job: Malloc_DIF_2 (Core Mask 0x4, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_2        :       5.00   16196.62      63.27       0.00     0.00    1972.97     409.60    3302.01
00:06:48.425  Job: Malloc_DIF_2 (Core Mask 0x8, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_2        :       5.00   16559.02      64.68       0.00     0.00    1929.79     395.42    3213.79
00:06:48.425  Job: Malloc_DIF_3 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_3        :       5.00   14722.74      57.51       0.00     0.00    2170.17     300.90    5747.00
00:06:48.425  Job: Malloc_DIF_3 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_3        :       5.00   16502.28      64.46       0.00     0.00    1936.05     352.89    3201.18
00:06:48.425  Job: Malloc_DIF_3 (Core Mask 0x4, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_3        :       5.00   16201.07      63.29       0.00     0.00    1972.05      57.50    3327.21
00:06:48.425  Job: Malloc_DIF_3 (Core Mask 0x8, workload: randrw, percentage: 50, depth: 32, IO size: 4096)
00:06:48.425  	 Malloc_DIF_3        :       5.00   16558.65      64.68       0.00     0.00    1929.48     359.19    3226.39
00:06:48.425  
[2024-12-09T10:21:40.586Z]  ===================================================================================================================
00:06:48.425  
[2024-12-09T10:21:40.586Z]  Total                       :             191949.71     749.80       0.00     0.00    1997.76      57.50    5747.00
00:06:48.425  {
00:06:48.425    "results": [
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_1",
00:06:48.425        "core_mask": "0x1",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.003193,
00:06:48.425        "iops": 14723.397638268201,
00:06:48.425        "mibps": 57.51327202448516,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 2170.824116646512,
00:06:48.425        "min_latency_us": 360.76317974058315,
00:06:48.425        "max_latency_us": 5721.798553789511
00:06:48.425      },
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_1",
00:06:48.425        "core_mask": "0x2",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.002587,
00:06:48.425        "iops": 16503.461109222088,
00:06:48.425        "mibps": 64.46664495789878,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 1936.657458327817,
00:06:48.425        "min_latency_us": 362.3385648049525,
00:06:48.425        "max_latency_us": 3251.5947728583565
00:06:48.425      },
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_1",
00:06:48.425        "core_mask": "0x4",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.002352,
00:06:48.425        "iops": 16197.18084612998,
00:06:48.425        "mibps": 63.27023768019524,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 1973.2500945187,
00:06:48.425        "min_latency_us": 400.14780634981713,
00:06:48.425        "max_latency_us": 3352.4194169779953
00:06:48.425      },
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_1",
00:06:48.425        "core_mask": "0x8",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.003083,
00:06:48.425        "iops": 16559.389480446356,
00:06:48.425        "mibps": 64.68511515799358,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 1930.1391221548963,
00:06:48.425        "min_latency_us": 412.750886864772,
00:06:48.425        "max_latency_us": 3251.5947728583565
00:06:48.425      },
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_2",
00:06:48.425        "core_mask": "0x1",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.003326,
00:06:48.425        "iops": 14723.00625623835,
00:06:48.425        "mibps": 57.511743188431055,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 2170.5312982779556,
00:06:48.425        "min_latency_us": 387.5447258348622,
00:06:48.425        "max_latency_us": 5671.386231729692
00:06:48.425      },
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_2",
00:06:48.425        "core_mask": "0x2",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.002757,
00:06:48.425        "iops": 16502.900300774152,
00:06:48.425        "mibps": 64.46445429989903,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 1936.3525327787786,
00:06:48.425        "min_latency_us": 393.8462660923397,
00:06:48.425        "max_latency_us": 3226.3886118284468
00:06:48.425      },
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_2",
00:06:48.425        "core_mask": "0x4",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.002526,
00:06:48.425        "iops": 16196.617468854734,
00:06:48.425        "mibps": 63.268036987713806,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 1972.9698757002302,
00:06:48.425        "min_latency_us": 409.60011673603327,
00:06:48.425        "max_latency_us": 3302.007094918176
00:06:48.425      },
00:06:48.425      {
00:06:48.425        "job": "Malloc_DIF_2",
00:06:48.425        "core_mask": "0x8",
00:06:48.425        "workload": "randrw",
00:06:48.425        "percentage": 50,
00:06:48.425        "status": "finished",
00:06:48.425        "queue_depth": 32,
00:06:48.425        "io_size": 4096,
00:06:48.425        "runtime": 5.003196,
00:06:48.425        "iops": 16559.015477306904,
00:06:48.425        "mibps": 64.6836542082301,
00:06:48.425        "io_failed": 0,
00:06:48.425        "io_timeout": 0,
00:06:48.425        "avg_latency_us": 1929.7945637181645,
00:06:48.425        "min_latency_us": 395.42165115670906,
00:06:48.425        "max_latency_us": 3213.785531313492
00:06:48.426      },
00:06:48.426      {
00:06:48.426        "job": "Malloc_DIF_3",
00:06:48.426        "core_mask": "0x1",
00:06:48.426        "workload": "randrw",
00:06:48.426        "percentage": 50,
00:06:48.426        "status": "finished",
00:06:48.426        "queue_depth": 32,
00:06:48.426        "io_size": 4096,
00:06:48.426        "runtime": 5.003416,
00:06:48.426        "iops": 14722.741423059766,
00:06:48.426        "mibps": 57.51070868382721,
00:06:48.426        "io_failed": 0,
00:06:48.426        "io_timeout": 0,
00:06:48.426        "avg_latency_us": 2170.1682159048787,
00:06:48.426        "min_latency_us": 300.89854729454754,
00:06:48.426        "max_latency_us": 5747.004714819421
00:06:48.426      },
00:06:48.426      {
00:06:48.426        "job": "Malloc_DIF_3",
00:06:48.426        "core_mask": "0x2",
00:06:48.426        "workload": "randrw",
00:06:48.426        "percentage": 50,
00:06:48.426        "status": "finished",
00:06:48.426        "queue_depth": 32,
00:06:48.426        "io_size": 4096,
00:06:48.426        "runtime": 5.002946,
00:06:48.426        "iops": 16502.276858474987,
00:06:48.426        "mibps": 64.46201897841792,
00:06:48.426        "io_failed": 0,
00:06:48.426        "io_timeout": 0,
00:06:48.426        "avg_latency_us": 1936.0461283980721,
00:06:48.426        "min_latency_us": 352.88625441873637,
00:06:48.426        "max_latency_us": 3201.182450798537
00:06:48.426      },
00:06:48.426      {
00:06:48.426        "job": "Malloc_DIF_3",
00:06:48.426        "core_mask": "0x4",
00:06:48.426        "workload": "randrw",
00:06:48.426        "percentage": 50,
00:06:48.426        "status": "finished",
00:06:48.426        "queue_depth": 32,
00:06:48.426        "io_size": 4096,
00:06:48.426        "runtime": 5.003126,
00:06:48.426        "iops": 16201.071090354311,
00:06:48.426        "mibps": 63.28543394669653,
00:06:48.426        "io_failed": 0,
00:06:48.426        "io_timeout": 0,
00:06:48.426        "avg_latency_us": 1972.0495838689883,
00:06:48.426        "min_latency_us": 57.50155484948159,
00:06:48.426        "max_latency_us": 3327.2132559480856
00:06:48.426      },
00:06:48.426      {
00:06:48.426        "job": "Malloc_DIF_3",
00:06:48.426        "core_mask": "0x8",
00:06:48.426        "workload": "randrw",
00:06:48.426        "percentage": 50,
00:06:48.426        "status": "finished",
00:06:48.426        "queue_depth": 32,
00:06:48.426        "io_size": 4096,
00:06:48.426        "runtime": 5.003307,
00:06:48.426        "iops": 16558.648110139955,
00:06:48.426        "mibps": 64.6822191802342,
00:06:48.426        "io_failed": 0,
00:06:48.426        "io_timeout": 0,
00:06:48.426        "avg_latency_us": 1929.4798498987861,
00:06:48.426        "min_latency_us": 359.1877946762138,
00:06:48.426        "max_latency_us": 3226.3886118284468
00:06:48.426      }
00:06:48.426    ],
00:06:48.426    "core_count": 4
00:06:48.426  }
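In the summary table and JSON above, the MiB/s column is derived from IOPS and the 4096-byte I/O size (1 MiB = 1048576 bytes). A quick cross-check against the Total row (191949.71 IOPS, reported as 749.80 MiB/s):

```shell
# Cross-check bdevperf's MiB/s column: MiB/s = IOPS * io_size / 1048576.
iops=191949.71
io_size=4096
awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f\n", i * s / 1048576 }'
```

This prints 749.80, in agreement with the Total row of the table.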
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@636 -- # kill -0 49771
00:06:53.680  Process exists. Pid: 49771
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@637 -- # echo 'Process exists. Pid: 49771'
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@643 -- # rpc_cmd bdev_malloc_delete Malloc_DIF_1
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@644 -- # rpc_cmd bdev_malloc_delete Malloc_DIF_2
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@645 -- # rpc_cmd bdev_malloc_delete Malloc_DIF_3
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@646 -- # killprocess 49771
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@954 -- # '[' -z 49771 ']'
00:06:53.680   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@958 -- # kill -0 49771
00:06:53.680    10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@959 -- # uname
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:06:53.681    10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@962 -- # ps -c -o command 49771
00:06:53.681    10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@962 -- # tail -1
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@962 -- # process_name=bdevperf
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@964 -- # '[' bdevperf = sudo ']'
00:06:53.681  killing process with pid 49771
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@972 -- # echo 'killing process with pid 49771'
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@973 -- # kill 49771
00:06:53.681  Received shutdown signal, test time was about 5.000000 seconds
00:06:53.681  
00:06:53.681                                                                                                  Latency(us)
00:06:53.681  
[2024-12-09T10:21:45.842Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:53.681  
[2024-12-09T10:21:45.842Z]  ===================================================================================================================
00:06:53.681  
[2024-12-09T10:21:45.842Z]  Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@978 -- # wait 49771
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- bdev/blockdev.sh@647 -- # trap - SIGINT SIGTERM EXIT
00:06:53.681  
00:06:53.681  real	0m11.197s
00:06:53.681  user	0m43.685s
00:06:53.681  sys	0m0.412s
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.681   10:21:45 blockdev_general.bdev_dif_insert_strip -- common/autotest_common.sh@10 -- # set +x
00:06:53.681  ************************************
00:06:53.681  END TEST bdev_dif_insert_strip
00:06:53.681  ************************************
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@832 -- # [[ bdev == gpt ]]
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@836 -- # [[ bdev == crypto_sw ]]
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@849 -- # cleanup
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]]
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]]
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]]
00:06:53.681   10:21:45 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]]
00:06:53.681  
00:06:53.681  real	1m37.296s
00:06:53.681  user	5m10.200s
00:06:53.681  sys	0m21.297s
00:06:53.681   10:21:45 blockdev_general -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:53.681  ************************************
00:06:53.681  END TEST blockdev_general
00:06:53.681  ************************************
00:06:53.681   10:21:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:53.681   10:21:45  -- spdk/autotest.sh@181 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:06:53.681   10:21:45  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:53.681   10:21:45  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:53.681   10:21:45  -- common/autotest_common.sh@10 -- # set +x
00:06:53.681  ************************************
00:06:53.681  START TEST bdevperf_config
00:06:53.681  ************************************
00:06:53.681   10:21:45 bdevperf_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh
00:06:53.681  * Looking for test storage...
00:06:53.681  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf
00:06:53.681    10:21:45 bdevperf_config -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:06:53.681     10:21:45 bdevperf_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:06:53.681     10:21:45 bdevperf_config -- common/autotest_common.sh@1711 -- # lcov --version
00:06:53.681    10:21:45 bdevperf_config -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@336 -- # IFS=.-:
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@336 -- # read -ra ver1
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@337 -- # IFS=.-:
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@337 -- # read -ra ver2
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@338 -- # local 'op=<'
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@340 -- # ver1_l=2
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@341 -- # ver2_l=1
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@344 -- # case "$op" in
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@345 -- # : 1
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@365 -- # decimal 1
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@353 -- # local d=1
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@355 -- # echo 1
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@365 -- # ver1[v]=1
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@366 -- # decimal 2
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@353 -- # local d=2
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:53.681     10:21:45 bdevperf_config -- scripts/common.sh@355 -- # echo 2
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@366 -- # ver2[v]=2
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:53.681    10:21:45 bdevperf_config -- scripts/common.sh@368 -- # return 0
00:06:53.681    10:21:45 bdevperf_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:53.681    10:21:45 bdevperf_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:06:53.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.681  		--rc genhtml_branch_coverage=1
00:06:53.681  		--rc genhtml_function_coverage=1
00:06:53.681  		--rc genhtml_legend=1
00:06:53.681  		--rc geninfo_all_blocks=1
00:06:53.681  		--rc geninfo_unexecuted_blocks=1
00:06:53.681  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:53.681  		'
00:06:53.681    10:21:45 bdevperf_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:06:53.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.681  		--rc genhtml_branch_coverage=1
00:06:53.681  		--rc genhtml_function_coverage=1
00:06:53.681  		--rc genhtml_legend=1
00:06:53.681  		--rc geninfo_all_blocks=1
00:06:53.681  		--rc geninfo_unexecuted_blocks=1
00:06:53.681  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:53.681  		'
00:06:53.681    10:21:45 bdevperf_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:06:53.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.681  		--rc genhtml_branch_coverage=1
00:06:53.681  		--rc genhtml_function_coverage=1
00:06:53.681  		--rc genhtml_legend=1
00:06:53.681  		--rc geninfo_all_blocks=1
00:06:53.681  		--rc geninfo_unexecuted_blocks=1
00:06:53.681  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:53.681  		'
00:06:53.681    10:21:45 bdevperf_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:06:53.681  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:53.681  		--rc genhtml_branch_coverage=1
00:06:53.681  		--rc genhtml_function_coverage=1
00:06:53.681  		--rc genhtml_legend=1
00:06:53.681  		--rc geninfo_all_blocks=1
00:06:53.681  		--rc geninfo_unexecuted_blocks=1
00:06:53.681  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:06:53.681  		'
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh
00:06:53.681    10:21:45 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:06:53.681  
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:06:53.681  
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@19 -- # create_job job1
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:06:53.681  
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:53.681   10:21:45 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:06:53.682  
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:06:53.682  
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:53.682   10:21:45 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:53.682    10:21:45 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:06:56.219   10:21:48 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-12-09 10:21:45.767361] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:56.219  [2024-12-09 10:21:45.767493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:56.219  Using job config with 4 jobs
00:06:56.219  EAL: TSC is not safe to use in SMP mode
00:06:56.219  EAL: TSC is not invariant
00:06:56.219  [2024-12-09 10:21:46.038939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.219  [2024-12-09 10:21:46.065597] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:56.219  [2024-12-09 10:21:46.065680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.219  cpumask for '\''job0'\'' is too big
00:06:56.219  cpumask for '\''job1'\'' is too big
00:06:56.219  cpumask for '\''job2'\'' is too big
00:06:56.219  cpumask for '\''job3'\'' is too big
00:06:56.219  Running I/O for 2 seconds...
00:06:56.219    1203200.00 IOPS,  1175.00 MiB/s
[2024-12-09T10:21:48.380Z]   1207296.00 IOPS,  1179.00 MiB/s
00:06:56.219                                                                                                  Latency(us)
00:06:56.219  
[2024-12-09T10:21:48.380Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301772.34     294.70       0.00     0.00     848.04     191.41    1537.58
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301757.87     294.69       0.00     0.00     847.92     186.68    1348.53
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301744.16     294.67       0.00     0.00     847.79     203.22    1279.21
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301730.90     294.66       0.00     0.00     847.67     185.90    1417.85
00:06:56.219  
[2024-12-09T10:21:48.380Z]  ===================================================================================================================
00:06:56.219  
[2024-12-09T10:21:48.380Z]  Total                       :            1207005.27    1178.72       0.00     0.00     847.85     185.90    1537.58'
00:06:56.219    10:21:48 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-12-09 10:21:45.767361] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:56.219  [2024-12-09 10:21:45.767493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:56.219  Using job config with 4 jobs
00:06:56.219  EAL: TSC is not safe to use in SMP mode
00:06:56.219  EAL: TSC is not invariant
00:06:56.219  [2024-12-09 10:21:46.038939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.219  [2024-12-09 10:21:46.065597] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:56.219  [2024-12-09 10:21:46.065680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.219  cpumask for '\''job0'\'' is too big
00:06:56.219  cpumask for '\''job1'\'' is too big
00:06:56.219  cpumask for '\''job2'\'' is too big
00:06:56.219  cpumask for '\''job3'\'' is too big
00:06:56.219  Running I/O for 2 seconds...
00:06:56.219    1203200.00 IOPS,  1175.00 MiB/s
[2024-12-09T10:21:48.380Z]   1207296.00 IOPS,  1179.00 MiB/s
00:06:56.219                                                                                                  Latency(us)
00:06:56.219  
[2024-12-09T10:21:48.380Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301772.34     294.70       0.00     0.00     848.04     191.41    1537.58
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301757.87     294.69       0.00     0.00     847.92     186.68    1348.53
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301744.16     294.67       0.00     0.00     847.79     203.22    1279.21
00:06:56.219  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.219  	 Malloc0             :       2.00  301730.90     294.66       0.00     0.00     847.67     185.90    1417.85
00:06:56.219  
[2024-12-09T10:21:48.380Z]  ===================================================================================================================
00:06:56.219  
[2024-12-09T10:21:48.380Z]  Total                       :            1207005.27    1178.72       0.00     0.00     847.85     185.90    1537.58'
00:06:56.220    10:21:48 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-12-09 10:21:45.767361] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:56.220  [2024-12-09 10:21:45.767493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:56.220  Using job config with 4 jobs
00:06:56.220  EAL: TSC is not safe to use in SMP mode
00:06:56.220  EAL: TSC is not invariant
00:06:56.220  [2024-12-09 10:21:46.038939] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.220  [2024-12-09 10:21:46.065597] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:56.220  [2024-12-09 10:21:46.065680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.220  cpumask for '\''job0'\'' is too big
00:06:56.220  cpumask for '\''job1'\'' is too big
00:06:56.220  cpumask for '\''job2'\'' is too big
00:06:56.220  cpumask for '\''job3'\'' is too big
00:06:56.220  Running I/O for 2 seconds...
00:06:56.220    1203200.00 IOPS,  1175.00 MiB/s
[2024-12-09T10:21:48.381Z]   1207296.00 IOPS,  1179.00 MiB/s
00:06:56.220                                                                                                  Latency(us)
00:06:56.220  
[2024-12-09T10:21:48.381Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:56.220  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.220  	 Malloc0             :       2.00  301772.34     294.70       0.00     0.00     848.04     191.41    1537.58
00:06:56.220  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.220  	 Malloc0             :       2.00  301757.87     294.69       0.00     0.00     847.92     186.68    1348.53
00:06:56.220  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.220  	 Malloc0             :       2.00  301744.16     294.67       0.00     0.00     847.79     203.22    1279.21
00:06:56.220  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:56.220  	 Malloc0             :       2.00  301730.90     294.66       0.00     0.00     847.67     185.90    1417.85
00:06:56.220  
[2024-12-09T10:21:48.381Z]  ===================================================================================================================
00:06:56.220  
[2024-12-09T10:21:48.381Z]  Total                       :            1207005.27    1178.72       0.00     0.00     847.85     185.90    1537.58'
00:06:56.220    10:21:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:06:56.220    10:21:48 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:06:56.220   10:21:48 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]]
00:06:56.220    10:21:48 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:06:56.220  [2024-12-09 10:21:48.201800] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:06:56.220  [2024-12-09 10:21:48.201971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:56.479  EAL: TSC is not safe to use in SMP mode
00:06:56.479  EAL: TSC is not invariant
00:06:56.479  [2024-12-09 10:21:48.505416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:56.479  [2024-12-09 10:21:48.533972] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:56.479  [2024-12-09 10:21:48.534076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.479  cpumask for 'job0' is too big
00:06:56.479  cpumask for 'job1' is too big
00:06:56.479  cpumask for 'job2' is too big
00:06:56.479  cpumask for 'job3' is too big
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs
00:06:59.038  Running I/O for 2 seconds...
00:06:59.038    1210368.00 IOPS,  1182.00 MiB/s
[2024-12-09T10:21:51.199Z]   1211392.00 IOPS,  1183.00 MiB/s
00:06:59.038                                                                                                  Latency(us)
00:06:59.038  
[2024-12-09T10:21:51.199Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:06:59.038  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:59.038  	 Malloc0             :       2.00  302783.05     295.69       0.00     0.00     845.17     193.77    1449.35
00:06:59.038  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:59.038  	 Malloc0             :       2.00  302768.23     295.67       0.00     0.00     845.07     175.66    1285.51
00:06:59.038  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:59.038  	 Malloc0             :       2.00  302754.32     295.66       0.00     0.00     844.95     176.44    1272.91
00:06:59.038  Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024)
00:06:59.038  	 Malloc0             :       2.00  302741.01     295.65       0.00     0.00     844.80     178.02    1411.55
00:06:59.038  
[2024-12-09T10:21:51.199Z]  ===================================================================================================================
00:06:59.038  
[2024-12-09T10:21:51.199Z]  Total                       :            1211046.61    1182.66       0.00     0.00     845.00     175.66    1449.35'
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:06:59.038  
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:06:59.038  
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0
00:06:59.038  
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:06:59.038   10:21:50 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:06:59.038    10:21:50 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:07:00.989   10:21:53 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-12-09 10:21:50.678817] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:00.989  [2024-12-09 10:21:50.678970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:00.989  Using job config with 3 jobs
00:07:00.989  EAL: TSC is not safe to use in SMP mode
00:07:00.989  EAL: TSC is not invariant
00:07:00.989  [2024-12-09 10:21:50.989913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.989  [2024-12-09 10:21:51.018788] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:00.989  [2024-12-09 10:21:51.018875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.989  cpumask for '\''job0'\'' is too big
00:07:00.989  cpumask for '\''job1'\'' is too big
00:07:00.989  cpumask for '\''job2'\'' is too big
00:07:00.989  Running I/O for 2 seconds...
00:07:00.989    1200384.00 IOPS,  1172.25 MiB/s
[2024-12-09T10:21:53.150Z]   1201536.00 IOPS,  1173.38 MiB/s
00:07:00.989                                                                                                  Latency(us)
00:07:00.989  
[2024-12-09T10:21:53.150Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:00.989  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:00.989  	 Malloc0             :       2.00  400442.58     391.06       0.00     0.00     639.02     178.81    1172.09
00:07:00.989  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:00.989  	 Malloc0             :       2.00  400401.83     391.02       0.00     0.00     638.98     173.29    1241.40
00:07:00.989  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:00.989  	 Malloc0             :       2.00  400382.83     391.00       0.00     0.00     638.88     172.50    1247.70
00:07:00.989  
[2024-12-09T10:21:53.150Z]  ===================================================================================================================
00:07:00.989  
[2024-12-09T10:21:53.150Z]  Total                       :            1201227.25    1173.07       0.00     0.00     638.96     172.50    1247.70'
00:07:00.989    10:21:53 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-12-09 10:21:50.678817] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:00.989  [2024-12-09 10:21:50.678970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:00.989  Using job config with 3 jobs
00:07:00.989  EAL: TSC is not safe to use in SMP mode
00:07:00.989  EAL: TSC is not invariant
00:07:00.989  [2024-12-09 10:21:50.989913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:00.989  [2024-12-09 10:21:51.018788] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:00.989  [2024-12-09 10:21:51.018875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:00.989  cpumask for '\''job0'\'' is too big
00:07:00.989  cpumask for '\''job1'\'' is too big
00:07:00.989  cpumask for '\''job2'\'' is too big
00:07:00.989  Running I/O for 2 seconds...
00:07:00.989    1200384.00 IOPS,  1172.25 MiB/s
[2024-12-09T10:21:53.150Z]   1201536.00 IOPS,  1173.38 MiB/s
00:07:00.989                                                                                                  Latency(us)
00:07:00.989  
[2024-12-09T10:21:53.150Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:00.989  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:00.989  	 Malloc0             :       2.00  400442.58     391.06       0.00     0.00     639.02     178.81    1172.09
00:07:00.989  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:00.989  	 Malloc0             :       2.00  400401.83     391.02       0.00     0.00     638.98     173.29    1241.40
00:07:00.989  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:00.989  	 Malloc0             :       2.00  400382.83     391.00       0.00     0.00     638.88     172.50    1247.70
00:07:00.989  
[2024-12-09T10:21:53.150Z]  ===================================================================================================================
00:07:00.989  
[2024-12-09T10:21:53.150Z]  Total                       :            1201227.25    1173.07       0.00     0.00     638.96     172.50    1247.70'
00:07:00.989    10:21:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:07:00.989    10:21:53 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:07:01.250    10:21:53 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-12-09 10:21:50.678817] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:01.250  [2024-12-09 10:21:50.678970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:01.250  Using job config with 3 jobs
00:07:01.250  EAL: TSC is not safe to use in SMP mode
00:07:01.250  EAL: TSC is not invariant
00:07:01.250  [2024-12-09 10:21:50.989913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.250  [2024-12-09 10:21:51.018788] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:01.250  [2024-12-09 10:21:51.018875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.250  cpumask for '\''job0'\'' is too big
00:07:01.250  cpumask for '\''job1'\'' is too big
00:07:01.250  cpumask for '\''job2'\'' is too big
00:07:01.250  Running I/O for 2 seconds...
00:07:01.250    1200384.00 IOPS,  1172.25 MiB/s
[2024-12-09T10:21:53.411Z]   1201536.00 IOPS,  1173.38 MiB/s
00:07:01.250                                                                                                  Latency(us)
00:07:01.250  
[2024-12-09T10:21:53.411Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:01.250  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:01.250  	 Malloc0             :       2.00  400442.58     391.06       0.00     0.00     639.02     178.81    1172.09
00:07:01.250  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:01.250  	 Malloc0             :       2.00  400401.83     391.02       0.00     0.00     638.98     173.29    1241.40
00:07:01.250  Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024)
00:07:01.250  	 Malloc0             :       2.00  400382.83     391.00       0.00     0.00     638.88     172.50    1247.70
00:07:01.250  
[2024-12-09T10:21:53.411Z]  ===================================================================================================================
00:07:01.250  
[2024-12-09T10:21:53.411Z]  Total                       :            1201227.25    1173.07       0.00     0.00     638.96     172.50    1247.70'
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]]
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]]
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@13 -- # cat
00:07:01.250  
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]'
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:07:01.250  
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]]
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]'
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]]
00:07:01.250  
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]'
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]]
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]'
00:07:01.250  
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]]
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]'
00:07:01.250  
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@19 -- # echo
00:07:01.250   10:21:53 bdevperf_config -- bdevperf/common.sh@20 -- # cat
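The create_job calls traced above build the bdevperf job file one INI-style section at a time before the `-j test.conf` run that follows. A minimal sketch of the helper, reconstructed from the traced `common.sh` lines — the real helper emits the section body via `cat` heredocs, so the exact keys printed below (`rw=`, `filename=`) are assumptions for illustration:

```shell
# Hypothetical reconstruction of bdevperf/common.sh create_job: print a
# "[section]" header, then "rw=" and "filename=" lines only when those
# arguments are non-empty (key names are assumptions, not the real heredoc).
create_job() {
    local job_section=$1
    local rw=$2
    local filename=$3
    printf '[%s]\n' "$job_section"
    [ -n "$rw" ] && printf 'rw=%s\n' "$rw"
    [ -n "$filename" ] && printf 'filename=%s\n' "$filename"
    printf '\n'
}

# e.g. the calls traced above:
create_job global rw Malloc0:Malloc1
create_job job0
```

Appending these sections to `test.conf` produces the job config consumed by the bdevperf invocation on the next traced line.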
00:07:01.250    10:21:53 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:07:03.788   10:21:55 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-12-09 10:21:53.175672] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:03.788  [2024-12-09 10:21:53.175803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:03.788  Using job config with 4 jobs
00:07:03.788  EAL: TSC is not safe to use in SMP mode
00:07:03.788  EAL: TSC is not invariant
00:07:03.788  [2024-12-09 10:21:53.478944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.788  [2024-12-09 10:21:53.506378] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:03.788  [2024-12-09 10:21:53.506470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.788  cpumask for '\''job0'\'' is too big
00:07:03.788  cpumask for '\''job1'\'' is too big
00:07:03.788  cpumask for '\''job2'\'' is too big
00:07:03.788  cpumask for '\''job3'\'' is too big
00:07:03.788  Running I/O for 2 seconds...
00:07:03.788    1169408.00 IOPS,  1142.00 MiB/s
[2024-12-09T10:21:55.949Z]   1195008.00 IOPS,  1167.00 MiB/s
00:07:03.788                                                                                                  Latency(us)
00:07:03.788  
[2024-12-09T10:21:55.949Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:03.788  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc0             :       2.00  149222.97     145.73       0.00     0.00    1715.20     422.20    3112.96
00:07:03.788  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc1             :       2.00  149216.26     145.72       0.00     0.00    1715.15     475.77    3087.75
00:07:03.788  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc0             :       2.00  149203.45     145.71       0.00     0.00    1714.63     409.60    2659.25
00:07:03.788  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc1             :       2.00  149254.37     145.76       0.00     0.00    1713.88     434.81    2659.25
00:07:03.788  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc0             :       2.00  149240.96     145.74       0.00     0.00    1713.45     401.72    2255.95
00:07:03.788  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc1             :       2.00  149234.93     145.74       0.00     0.00    1713.43     444.26    2243.35
00:07:03.788  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc0             :       2.00  149229.72     145.73       0.00     0.00    1712.93     395.42    2066.91
00:07:03.788  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.788  	 Malloc1             :       2.00  149320.07     145.82       0.00     0.00    1711.72     297.75    2066.91
00:07:03.788  
[2024-12-09T10:21:55.949Z]  ===================================================================================================================
00:07:03.788  
[2024-12-09T10:21:55.949Z]  Total                       :            1193922.73    1165.94       0.00     0.00    1713.80     297.75    3112.96'
00:07:03.788    10:21:55 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-12-09 10:21:53.175672] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:03.788  [2024-12-09 10:21:53.175803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:03.788  Using job config with 4 jobs
00:07:03.788  EAL: TSC is not safe to use in SMP mode
00:07:03.788  EAL: TSC is not invariant
00:07:03.788  [2024-12-09 10:21:53.478944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.788  [2024-12-09 10:21:53.506378] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:03.788  [2024-12-09 10:21:53.506470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.788  cpumask for '\''job0'\'' is too big
00:07:03.788  cpumask for '\''job1'\'' is too big
00:07:03.788  cpumask for '\''job2'\'' is too big
00:07:03.788  cpumask for '\''job3'\'' is too big
00:07:03.788  Running I/O for 2 seconds...
00:07:03.788    1169408.00 IOPS,  1142.00 MiB/s
[2024-12-09T10:21:55.949Z]   1195008.00 IOPS,  1167.00 MiB/s
00:07:03.788                                                                                                  Latency(us)
00:07:03.788  
[2024-12-09T10:21:55.949Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:03.788  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149222.97     145.73       0.00     0.00    1715.20     422.20    3112.96
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149216.26     145.72       0.00     0.00    1715.15     475.77    3087.75
00:07:03.789  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149203.45     145.71       0.00     0.00    1714.63     409.60    2659.25
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149254.37     145.76       0.00     0.00    1713.88     434.81    2659.25
00:07:03.789  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149240.96     145.74       0.00     0.00    1713.45     401.72    2255.95
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149234.93     145.74       0.00     0.00    1713.43     444.26    2243.35
00:07:03.789  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149229.72     145.73       0.00     0.00    1712.93     395.42    2066.91
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149320.07     145.82       0.00     0.00    1711.72     297.75    2066.91
00:07:03.789  
[2024-12-09T10:21:55.950Z]  ===================================================================================================================
00:07:03.789  
[2024-12-09T10:21:55.950Z]  Total                       :            1193922.73    1165.94       0.00     0.00    1713.80     297.75    3112.96'
00:07:03.789    10:21:55 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-12-09 10:21:53.175672] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:03.789  [2024-12-09 10:21:53.175803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:03.789  Using job config with 4 jobs
00:07:03.789  EAL: TSC is not safe to use in SMP mode
00:07:03.789  EAL: TSC is not invariant
00:07:03.789  [2024-12-09 10:21:53.478944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.789  [2024-12-09 10:21:53.506378] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:03.789  [2024-12-09 10:21:53.506470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.789  cpumask for '\''job0'\'' is too big
00:07:03.789  cpumask for '\''job1'\'' is too big
00:07:03.789  cpumask for '\''job2'\'' is too big
00:07:03.789  cpumask for '\''job3'\'' is too big
00:07:03.789  Running I/O for 2 seconds...
00:07:03.789    1169408.00 IOPS,  1142.00 MiB/s
[2024-12-09T10:21:55.950Z]   1195008.00 IOPS,  1167.00 MiB/s
00:07:03.789                                                                                                  Latency(us)
00:07:03.789  
[2024-12-09T10:21:55.950Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:03.789  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149222.97     145.73       0.00     0.00    1715.20     422.20    3112.96
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149216.26     145.72       0.00     0.00    1715.15     475.77    3087.75
00:07:03.789  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149203.45     145.71       0.00     0.00    1714.63     409.60    2659.25
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149254.37     145.76       0.00     0.00    1713.88     434.81    2659.25
00:07:03.789  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149240.96     145.74       0.00     0.00    1713.45     401.72    2255.95
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149234.93     145.74       0.00     0.00    1713.43     444.26    2243.35
00:07:03.789  Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc0             :       2.00  149229.72     145.73       0.00     0.00    1712.93     395.42    2066.91
00:07:03.789  Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024)
00:07:03.789  	 Malloc1             :       2.00  149320.07     145.82       0.00     0.00    1711.72     297.75    2066.91
00:07:03.789  
[2024-12-09T10:21:55.950Z]  ===================================================================================================================
00:07:03.789  
[2024-12-09T10:21:55.950Z]  Total                       :            1193922.73    1165.94       0.00     0.00    1713.80     297.75    3112.96'
00:07:03.789    10:21:55 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs'
00:07:03.789    10:21:55 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+'
00:07:03.789   10:21:55 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]]
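The `[[ 4 == \4 ]]` check above compares the expected job count against what get_num_jobs pulls out of the bdevperf banner. The extraction itself is the two `grep -oE` stages traced at common.sh@32, wrapped here in a small function for clarity:

```shell
# First grep isolates the "Using job config with N jobs" notice from the
# full bdevperf output; the second strips it down to the number itself.
get_num_jobs() {
    echo "$1" | grep -oE 'Using job config with [0-9]+ jobs' | grep -oE '[0-9]+'
}

get_num_jobs 'Using job config with 4 jobs'   # prints: 4
```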
00:07:03.789   10:21:55 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup
00:07:03.789   10:21:55 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf
00:07:03.789   10:21:55 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:07:03.789  
00:07:03.789  real	0m10.043s
00:07:03.789  user	0m8.721s
00:07:03.789  sys	0m1.369s
00:07:03.789   10:21:55 bdevperf_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:03.789   10:21:55 bdevperf_config -- common/autotest_common.sh@10 -- # set +x
00:07:03.789  ************************************
00:07:03.789  END TEST bdevperf_config
00:07:03.789  ************************************
00:07:03.789    10:21:55  -- spdk/autotest.sh@182 -- # uname -s
00:07:03.789   10:21:55  -- spdk/autotest.sh@182 -- # [[ FreeBSD == Linux ]]
00:07:03.789   10:21:55  -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]]
00:07:03.789    10:21:55  -- spdk/autotest.sh@194 -- # uname -s
00:07:03.789   10:21:55  -- spdk/autotest.sh@194 -- # [[ FreeBSD == Linux ]]
00:07:03.789   10:21:55  -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']'
00:07:03.789   10:21:55  -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:07:03.789   10:21:55  -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:03.789   10:21:55  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:03.789   10:21:55  -- common/autotest_common.sh@10 -- # set +x
00:07:03.789  ************************************
00:07:03.789  START TEST blockdev_nvme
00:07:03.789  ************************************
00:07:03.789   10:21:55 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme
00:07:03.789  * Looking for test storage...
00:07:03.789  * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev
00:07:03.789    10:21:55 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:03.789     10:21:55 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:03.789     10:21:55 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:07:03.789    10:21:55 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-:
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-:
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<'
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@345 -- # : 1
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=1
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 1
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@353 -- # local d=2
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:03.789     10:21:55 blockdev_nvme -- scripts/common.sh@355 -- # echo 2
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:03.789    10:21:55 blockdev_nvme -- scripts/common.sh@368 -- # return 0
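The scripts/common.sh trace above is the lcov version gate `lt 1.15 2`: split both versions on `.`, `-` and `:`, then compare field by field. A hedged reconstruction of that comparison — only the `<` operator is implemented here, and the equal-versions return value is an assumption:

```shell
# Hedged sketch of scripts/common.sh cmp_versions as traced above.
# Returns 0 (true) when "$1 < $3" holds; missing fields count as 0.
cmp_versions() {
    local -a ver1 ver2
    local v max
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions: strictly-less-than does not hold (assumption)
}
lt() { cmp_versions "$1" '<' "$2"; }
```

So `lt 1.15 2` succeeds (1 < 2 in the first field) and the coverage-enabled LCOV_OPTS branch below is taken.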
00:07:03.789    10:21:55 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:03.789    10:21:55 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:03.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.789  		--rc genhtml_branch_coverage=1
00:07:03.789  		--rc genhtml_function_coverage=1
00:07:03.789  		--rc genhtml_legend=1
00:07:03.789  		--rc geninfo_all_blocks=1
00:07:03.789  		--rc geninfo_unexecuted_blocks=1
00:07:03.789  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:03.789  		'
00:07:03.789    10:21:55 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:03.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.789  		--rc genhtml_branch_coverage=1
00:07:03.789  		--rc genhtml_function_coverage=1
00:07:03.789  		--rc genhtml_legend=1
00:07:03.789  		--rc geninfo_all_blocks=1
00:07:03.789  		--rc geninfo_unexecuted_blocks=1
00:07:03.789  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:03.789  		'
00:07:03.789    10:21:55 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:03.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.789  		--rc genhtml_branch_coverage=1
00:07:03.789  		--rc genhtml_function_coverage=1
00:07:03.789  		--rc genhtml_legend=1
00:07:03.789  		--rc geninfo_all_blocks=1
00:07:03.789  		--rc geninfo_unexecuted_blocks=1
00:07:03.789  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:03.789  		'
00:07:03.789    10:21:55 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:03.789  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:03.789  		--rc genhtml_branch_coverage=1
00:07:03.789  		--rc genhtml_function_coverage=1
00:07:03.790  		--rc genhtml_legend=1
00:07:03.790  		--rc geninfo_all_blocks=1
00:07:03.790  		--rc geninfo_unexecuted_blocks=1
00:07:03.790  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:03.790  		'
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:07:03.790    10:21:55 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@20 -- # :
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5
00:07:03.790    10:21:55 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' FreeBSD = Linux ']'
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@716 -- # PRE_RESERVED_MEM=2048
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device=
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek=
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx=
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc=
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']'
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]]
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]]
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=50029
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 50029
00:07:03.790   10:21:55 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 50029 ']'
00:07:03.790  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:03.790   10:21:55 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:03.790   10:21:55 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' ''
00:07:03.790   10:21:55 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:03.790   10:21:55 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:03.790   10:21:55 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:03.790   10:21:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:03.790  [2024-12-09 10:21:55.861878] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:03.790  [2024-12-09 10:21:55.862067] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:04.047  EAL: TSC is not safe to use in SMP mode
00:07:04.047  EAL: TSC is not invariant
00:07:04.047  [2024-12-09 10:21:56.171623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:04.047  [2024-12-09 10:21:56.201958] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:04.047  [2024-12-09 10:21:56.202004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
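At this point waitforlisten blocks until the freshly started spdk_tgt answers on /var/tmp/spdk.sock. A simplified sketch of that wait, assuming the helper polls for the RPC unix socket — the real one also verifies the pid is still alive and enforces the `max_retries=100` traced above; the 100 ms interval here is illustrative:

```shell
# Poll until the target's RPC unix socket appears, or give up after
# max_retries attempts. Interval and defaults are illustrative only.
waitforlisten_sketch() {
    local rpc_addr=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -S "$rpc_addr" ] && return 0   # socket exists: target is listening
        sleep 0.1
    done
    return 1
}
```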
00:07:04.708   10:21:56 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:04.708   10:21:56 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0
00:07:04.708   10:21:56 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in
00:07:04.708   10:21:56 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf
00:07:04.708   10:21:56 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json
00:07:04.708   10:21:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json
00:07:04.708    10:21:56 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:04.708   10:21:56 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\'''
00:07:04.708   10:21:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.708   10:21:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.708  [2024-12-09 10:21:56.821843] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat
00:07:04.978    10:21:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.978    10:21:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.978    10:21:56 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs
00:07:04.978    10:21:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)'
00:07:04.978    10:21:56 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name
00:07:04.978    10:21:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name
00:07:04.978    10:21:56 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' '  "name": "Nvme0n1",' '  "aliases": [' '    "6b13eb03-b617-11ef-9b05-d5e34e08fe3b"' '  ],' '  "product_name": "NVMe disk",' '  "block_size": 4096,' '  "num_blocks": 1310720,' '  "uuid": "6b13eb03-b617-11ef-9b05-d5e34e08fe3b",' '  "numa_id": -1,' '  "assigned_rate_limits": {' '    "rw_ios_per_sec": 0,' '    "rw_mbytes_per_sec": 0,' '    "r_mbytes_per_sec": 0,' '    "w_mbytes_per_sec": 0' '  },' '  "claimed": false,' '  "zoned": false,' '  "supported_io_types": {' '    "read": true,' '    "write": true,' '    "unmap": true,' '    "flush": true,' '    "reset": true,' '    "nvme_admin": true,' '    "nvme_io": true,' '    "nvme_io_md": false,' '    "write_zeroes": true,' '    "zcopy": false,' '    "get_zone_info": false,' '    "zone_management": false,' '    "zone_append": false,' '    "compare": true,' '    "compare_and_write": false,' '    "abort": true,' '    "seek_hole": false,' '    "seek_data": false,' '    "copy": true,' '    "nvme_iov_md": false' '  },' '  "driver_specific": {' '    "nvme": [' '      {' '        "pci_address": "0000:00:10.0",' '        "trid": {' '          "trtype": "PCIe",' '          "traddr": "0000:00:10.0"' '        },' '        "ctrlr_data": {' '          "cntlid": 0,' '          "vendor_id": "0x1b36",' '          "model_number": "QEMU NVMe Ctrl",' '          "serial_number": "12340",' '          "firmware_revision": "8.0.0",' '          "subnqn": "nqn.2019-08.org.qemu:12340",' '          "oacs": {' '            "security": 0,' '            "format": 1,' '            "firmware": 0,' '            "ns_manage": 1' '          },' '          "multi_ctrlr": false,' '          "ana_reporting": false' '        },' '        "vs": {' '          "nvme_version": "1.4"' '        },' '        "ns_data": {' '          "id": 1,' '          "can_share": false' '        }' '      }' '    ],' '    "mp_policy": "active_passive"' '  }' '}'
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}")
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT
00:07:04.978   10:21:56 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 50029
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 50029 ']'
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 50029
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@959 -- # uname
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@962 -- # tail -1
00:07:04.978    10:21:56 blockdev_nvme -- common/autotest_common.sh@962 -- # ps -c -o command 50029
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 50029'
00:07:04.978  killing process with pid 50029
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 50029
00:07:04.978   10:21:56 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 50029
00:07:04.978   10:21:57 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:04.978   10:21:57 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:07:04.978   10:21:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']'
00:07:04.978   10:21:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:04.978   10:21:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:04.978  ************************************
00:07:04.978  START TEST bdev_hello_world
00:07:04.978  ************************************
00:07:04.978   10:21:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 ''
00:07:04.978  [2024-12-09 10:21:57.103155] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:04.978  [2024-12-09 10:21:57.103287] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:05.236  EAL: TSC is not safe to use in SMP mode
00:07:05.236  EAL: TSC is not invariant
00:07:05.495  [2024-12-09 10:21:57.398860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.495  [2024-12-09 10:21:57.428691] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:05.495  [2024-12-09 10:21:57.428777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:05.495  [2024-12-09 10:21:57.485197] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:05.495  [2024-12-09 10:21:57.557574] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application
00:07:05.495  [2024-12-09 10:21:57.557619] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1
00:07:05.495  [2024-12-09 10:21:57.557631] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel
00:07:05.495  [2024-12-09 10:21:57.558503] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev
00:07:05.495  [2024-12-09 10:21:57.559095] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully
00:07:05.495  [2024-12-09 10:21:57.559133] hello_bdev.c:  84:hello_read: *NOTICE*: Reading io
00:07:05.495  [2024-12-09 10:21:57.559246] hello_bdev.c:  65:read_complete: *NOTICE*: Read string from bdev : Hello World!
00:07:05.495  
00:07:05.495  [2024-12-09 10:21:57.559269] hello_bdev.c:  74:read_complete: *NOTICE*: Stopping app
00:07:05.495  
00:07:05.495  real	0m0.548s
00:07:05.495  user	0m0.228s
00:07:05.495  sys	0m0.319s
00:07:05.495   10:21:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:05.495   10:21:57 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:05.495  ************************************
00:07:05.495  END TEST bdev_hello_world
00:07:05.495  ************************************
00:07:05.753   10:21:57 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds ''
00:07:05.754   10:21:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:05.754   10:21:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:05.754   10:21:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:05.754  ************************************
00:07:05.754  START TEST bdev_bounds
00:07:05.754  ************************************
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds ''
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=50096
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT
00:07:05.754  Process bdevio pid: 50096
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 50096'
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 50096
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 50096 ']'
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:05.754  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:05.754   10:21:57 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:05.754  [2024-12-09 10:21:57.685357] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:05.754  [2024-12-09 10:21:57.685511] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:06.012  EAL: TSC is not safe to use in SMP mode
00:07:06.012  EAL: TSC is not invariant
00:07:06.012  [2024-12-09 10:21:57.988146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:06.012  [2024-12-09 10:21:58.015946] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:06.012  [2024-12-09 10:21:58.015985] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:07:06.012  [2024-12-09 10:21:58.015992] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2].
00:07:06.012  [2024-12-09 10:21:58.016100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:06.012  [2024-12-09 10:21:58.016295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:06.012  [2024-12-09 10:21:58.016514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.012  [2024-12-09 10:21:58.073972] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:06.577  I/O targets:
00:07:06.577    Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB)
00:07:06.577  
00:07:06.577  
00:07:06.577       CUnit - A unit testing framework for C - Version 2.1-3
00:07:06.577       http://cunit.sourceforge.net/
00:07:06.577  
00:07:06.577  
00:07:06.577  Suite: bdevio tests on: Nvme0n1
00:07:06.577    Test: blockdev write read block ...passed
00:07:06.577    Test: blockdev write zeroes read block ...passed
00:07:06.577    Test: blockdev write zeroes read no split ...passed
00:07:06.577    Test: blockdev write zeroes read split ...passed
00:07:06.577    Test: blockdev write zeroes read split partial ...passed
00:07:06.577    Test: blockdev reset ...[2024-12-09 10:21:58.614942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:07:06.577  [2024-12-09 10:21:58.616255] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:07:06.577  passed
00:07:06.577    Test: blockdev write read 8 blocks ...passed
00:07:06.577    Test: blockdev write read size > 128k ...passed
00:07:06.577    Test: blockdev write read invalid size ...passed
00:07:06.577    Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:06.577    Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:06.577    Test: blockdev write read max offset ...passed
00:07:06.577    Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:06.577    Test: blockdev writev readv 8 blocks ...passed
00:07:06.577    Test: blockdev writev readv 30 x 1block ...passed
00:07:06.577    Test: blockdev writev readv block ...passed
00:07:06.577    Test: blockdev writev readv size > 128k ...passed
00:07:06.577    Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:06.577    Test: blockdev comparev and writev ...[2024-12-09 10:21:58.619561] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x1ba027000 len:0x1000
00:07:06.577  [2024-12-09 10:21:58.619602] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:06.577  passed
00:07:06.577    Test: blockdev nvme passthru rw ...passed
00:07:06.577    Test: blockdev nvme passthru vendor specific ...passed
00:07:06.577    Test: blockdev nvme admin passthru ...[2024-12-09 10:21:58.619911] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:06.577  [2024-12-09 10:21:58.619926] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:06.577  passed
00:07:06.577    Test: blockdev copy ...passed
00:07:06.577  
00:07:06.577  Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:06.577                suites      1      1    n/a      0        0
00:07:06.577                 tests     23     23     23      0        0
00:07:06.577               asserts    152    152    152      0      n/a
00:07:06.577  
00:07:06.577  Elapsed time =    0.039 seconds
00:07:06.577  0
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 50096
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 50096 ']'
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 50096
00:07:06.577    10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:07:06.577    10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # tail -1
00:07:06.577    10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # ps -c -o command 50096
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # process_name=bdevio
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' bdevio = sudo ']'
00:07:06.577  killing process with pid 50096
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 50096'
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 50096
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 50096
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:06.577  
00:07:06.577  real	0m1.054s
00:07:06.577  user	0m2.397s
00:07:06.577  sys	0m0.410s
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:06.577   10:21:58 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:06.577  ************************************
00:07:06.577  END TEST bdev_bounds
00:07:06.577  ************************************
00:07:06.835   10:21:58 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:07:06.835   10:21:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:06.835   10:21:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.835   10:21:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:06.835  ************************************
00:07:06.835  START TEST bdev_nbd
00:07:06.835  ************************************
00:07:06.835   10:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 ''
00:07:06.835    10:21:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:07:06.835   10:21:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ FreeBSD == Linux ]]
00:07:06.835   10:21:58 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # return 0
00:07:06.835  
00:07:06.835  real	0m0.003s
00:07:06.835  user	0m0.002s
00:07:06.835  sys	0m0.001s
00:07:06.835   10:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:06.835   10:21:58 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:07:06.835  ************************************
00:07:06.835  END TEST bdev_nbd
00:07:06.835  ************************************
00:07:06.835   10:21:58 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:07:06.835   10:21:58 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']'
00:07:06.835  skipping fio tests on NVMe due to multi-ns failures.
00:07:06.835   10:21:58 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.'
00:07:06.835   10:21:58 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:07:06.835   10:21:58 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:06.835   10:21:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:06.835   10:21:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:06.835   10:21:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:06.835  ************************************
00:07:06.835  START TEST bdev_verify
00:07:06.835  ************************************
00:07:06.835   10:21:58 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:07:06.835  [2024-12-09 10:21:58.806055] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:06.835  [2024-12-09 10:21:58.806281] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:07.092  EAL: TSC is not safe to use in SMP mode
00:07:07.092  EAL: TSC is not invariant
00:07:07.093  [2024-12-09 10:21:59.110389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:07.093  [2024-12-09 10:21:59.136487] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:07.093  [2024-12-09 10:21:59.136523] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:07:07.093  [2024-12-09 10:21:59.136646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:07.093  [2024-12-09 10:21:59.136634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:07.093  [2024-12-09 10:21:59.193987] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:07.350  Running I/O for 5 seconds...
00:07:09.214      66005.00 IOPS,   257.83 MiB/s
[2024-12-09T10:22:02.307Z]     67728.00 IOPS,   264.56 MiB/s
[2024-12-09T10:22:03.679Z]     67299.00 IOPS,   262.89 MiB/s
[2024-12-09T10:22:04.611Z]     68890.25 IOPS,   269.10 MiB/s
[2024-12-09T10:22:04.611Z]     69269.00 IOPS,   270.58 MiB/s
00:07:12.450                                                                                                  Latency(us)
00:07:12.450  
[2024-12-09T10:22:04.611Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:12.450  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:12.450  	 Verification LBA range: start 0x0 length 0xa0000
00:07:12.450  	 Nvme0n1             :       5.00   34734.82     135.68       0.00     0.00    3676.31     389.12    8418.86
00:07:12.450  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:12.450  	 Verification LBA range: start 0xa0000 length 0xa0000
00:07:12.450  	 Nvme0n1             :       5.00   34511.19     134.81       0.00     0.00    3699.78     374.94   10384.94
00:07:12.450  
[2024-12-09T10:22:04.611Z]  ===================================================================================================================
00:07:12.450  
[2024-12-09T10:22:04.611Z]  Total                       :              69246.01     270.49       0.00     0.00    3688.00     374.94   10384.94
00:07:13.823  
00:07:13.823  real	0m7.009s
00:07:13.823  user	0m13.271s
00:07:13.823  sys	0m0.353s
00:07:13.824   10:22:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:13.824  ************************************
00:07:13.824  END TEST bdev_verify
00:07:13.824  ************************************
00:07:13.824   10:22:05 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:13.824   10:22:05 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:13.824   10:22:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:07:13.824   10:22:05 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:13.824   10:22:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:13.824  ************************************
00:07:13.824  START TEST bdev_verify_big_io
00:07:13.824  ************************************
00:07:13.824   10:22:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:13.824  [2024-12-09 10:22:05.886055] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:13.824  [2024-12-09 10:22:05.886292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:14.082  EAL: TSC is not safe to use in SMP mode
00:07:14.082  EAL: TSC is not invariant
00:07:14.082  [2024-12-09 10:22:06.192877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:14.082  [2024-12-09 10:22:06.219676] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:14.082  [2024-12-09 10:22:06.219717] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:07:14.082  [2024-12-09 10:22:06.219873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:14.082  [2024-12-09 10:22:06.220124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:14.342  [2024-12-09 10:22:06.278298] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:14.342  Running I/O for 5 seconds...
00:07:16.253      14720.00 IOPS,   920.00 MiB/s
[2024-12-09T10:22:09.790Z]     14806.00 IOPS,   925.38 MiB/s
[2024-12-09T10:22:10.729Z]     14387.33 IOPS,   899.21 MiB/s
[2024-12-09T10:22:11.671Z]     13318.75 IOPS,   832.42 MiB/s
[2024-12-09T10:22:11.671Z]     12777.60 IOPS,   798.60 MiB/s
00:07:19.510                                                                                                  Latency(us)
00:07:19.510  
[2024-12-09T10:22:11.671Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:19.510  Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:19.510  	 Verification LBA range: start 0x0 length 0xa000
00:07:19.510  	 Nvme0n1             :       5.02    6353.19     397.07       0.00     0.00   19968.08     141.78   96791.66
00:07:19.510  Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:19.510  	 Verification LBA range: start 0xa000 length 0xa000
00:07:19.510  	 Nvme0n1             :       5.02    6417.94     401.12       0.00     0.00   19763.10      68.92   96791.66
00:07:19.510  
[2024-12-09T10:22:11.671Z]  ===================================================================================================================
00:07:19.510  
[2024-12-09T10:22:11.671Z]  Total                       :              12771.13     798.20       0.00     0.00   19865.07      68.92   96791.66
00:07:21.424  
00:07:21.424  real	0m7.512s
00:07:21.424  user	0m14.289s
00:07:21.424  sys	0m0.332s
00:07:21.424   10:22:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:21.424  ************************************
00:07:21.424  END TEST bdev_verify_big_io
00:07:21.424  ************************************
00:07:21.424   10:22:13 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:21.424   10:22:13 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:21.424   10:22:13 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:21.424   10:22:13 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:21.424   10:22:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:21.424  ************************************
00:07:21.424  START TEST bdev_write_zeroes
00:07:21.424  ************************************
00:07:21.424   10:22:13 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:21.424  [2024-12-09 10:22:13.481050] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:21.424  [2024-12-09 10:22:13.481288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:21.684  EAL: TSC is not safe to use in SMP mode
00:07:21.684  EAL: TSC is not invariant
00:07:21.684  [2024-12-09 10:22:13.787956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:21.684  [2024-12-09 10:22:13.814803] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:21.684  [2024-12-09 10:22:13.814887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:21.944  [2024-12-09 10:22:13.871903] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:21.944  Running I/O for 1 seconds...
00:07:22.881      67465.00 IOPS,   263.54 MiB/s
00:07:22.881                                                                                                  Latency(us)
00:07:22.881  
[2024-12-09T10:22:15.042Z]  Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:07:22.881  Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:22.881  	 Nvme0n1             :       1.00   67467.43     263.54       0.00     0.00    1896.15     450.56   20164.93
00:07:22.881  
[2024-12-09T10:22:15.042Z]  ===================================================================================================================
00:07:22.881  
[2024-12-09T10:22:15.042Z]  Total                       :              67467.43     263.54       0.00     0.00    1896.15     450.56   20164.93
00:07:22.881  
00:07:22.881  real	0m1.562s
00:07:22.881  user	0m1.219s
00:07:22.881  sys	0m0.330s
00:07:22.881   10:22:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:22.881  ************************************
00:07:22.881  END TEST bdev_write_zeroes
00:07:22.881  ************************************
00:07:22.881   10:22:15 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:23.144   10:22:15 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:23.145   10:22:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:23.145   10:22:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.145   10:22:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:23.145  ************************************
00:07:23.145  START TEST bdev_json_nonenclosed
00:07:23.145  ************************************
00:07:23.145   10:22:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:23.145  [2024-12-09 10:22:15.116666] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:23.145  [2024-12-09 10:22:15.116800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:23.409  EAL: TSC is not safe to use in SMP mode
00:07:23.409  EAL: TSC is not invariant
00:07:23.409  [2024-12-09 10:22:15.430594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.409  [2024-12-09 10:22:15.458852] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:23.409  [2024-12-09 10:22:15.458929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.409  [2024-12-09 10:22:15.458949] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:07:23.409  [2024-12-09 10:22:15.458957] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:07:23.409  [2024-12-09 10:22:15.458964] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:23.409  
00:07:23.409  real	0m0.398s
00:07:23.409  user	0m0.041s
00:07:23.409  sys	0m0.353s
00:07:23.409   10:22:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.409  ************************************
00:07:23.409  END TEST bdev_json_nonenclosed
00:07:23.409  ************************************
00:07:23.409   10:22:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:07:23.409   10:22:15 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:23.409   10:22:15 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:23.409   10:22:15 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:23.409   10:22:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:23.671  ************************************
00:07:23.671  START TEST bdev_json_nonarray
00:07:23.671  ************************************
00:07:23.671   10:22:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:23.671  [2024-12-09 10:22:15.590310] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:23.671  [2024-12-09 10:22:15.590525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:23.933  EAL: TSC is not safe to use in SMP mode
00:07:23.933  EAL: TSC is not invariant
00:07:23.933  [2024-12-09 10:22:15.902630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:23.933  [2024-12-09 10:22:15.930900] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:23.933  [2024-12-09 10:22:15.930983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:23.933  [2024-12-09 10:22:15.931005] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:07:23.933  [2024-12-09 10:22:15.931013] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 
00:07:23.933  [2024-12-09 10:22:15.931020] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:23.933  
00:07:23.933  real	0m0.398s
00:07:23.933  user	0m0.056s
00:07:23.933  sys	0m0.338s
00:07:23.933   10:22:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.933  ************************************
00:07:23.933  END TEST bdev_json_nonarray
00:07:23.933  ************************************
00:07:23.933   10:22:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]]
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]]
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]]
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]]
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]]
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]]
00:07:23.933   10:22:16 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]]
00:07:23.933  
00:07:23.933  real	0m20.375s
00:07:23.933  user	0m32.994s
00:07:23.933  sys	0m3.134s
00:07:23.933   10:22:16 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:23.933  ************************************
00:07:23.933  END TEST blockdev_nvme
00:07:23.933  ************************************
00:07:23.933   10:22:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:24.194    10:22:16  -- spdk/autotest.sh@209 -- # uname -s
00:07:24.194   10:22:16  -- spdk/autotest.sh@209 -- # [[ FreeBSD == Linux ]]
00:07:24.194   10:22:16  -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:07:24.194   10:22:16  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.194   10:22:16  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.195   10:22:16  -- common/autotest_common.sh@10 -- # set +x
00:07:24.195  ************************************
00:07:24.195  START TEST nvme
00:07:24.195  ************************************
00:07:24.195   10:22:16 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh
00:07:24.195  * Looking for test storage...
00:07:24.195  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:24.195    10:22:16 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:24.195     10:22:16 nvme -- common/autotest_common.sh@1711 -- # lcov --version
00:07:24.195     10:22:16 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:24.195    10:22:16 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:24.195    10:22:16 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:24.195    10:22:16 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:24.195    10:22:16 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:24.195    10:22:16 nvme -- scripts/common.sh@336 -- # IFS=.-:
00:07:24.195    10:22:16 nvme -- scripts/common.sh@336 -- # read -ra ver1
00:07:24.195    10:22:16 nvme -- scripts/common.sh@337 -- # IFS=.-:
00:07:24.195    10:22:16 nvme -- scripts/common.sh@337 -- # read -ra ver2
00:07:24.195    10:22:16 nvme -- scripts/common.sh@338 -- # local 'op=<'
00:07:24.195    10:22:16 nvme -- scripts/common.sh@340 -- # ver1_l=2
00:07:24.195    10:22:16 nvme -- scripts/common.sh@341 -- # ver2_l=1
00:07:24.195    10:22:16 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:24.195    10:22:16 nvme -- scripts/common.sh@344 -- # case "$op" in
00:07:24.195    10:22:16 nvme -- scripts/common.sh@345 -- # : 1
00:07:24.195    10:22:16 nvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:24.195    10:22:16 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:24.195     10:22:16 nvme -- scripts/common.sh@365 -- # decimal 1
00:07:24.195     10:22:16 nvme -- scripts/common.sh@353 -- # local d=1
00:07:24.195     10:22:16 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:24.195     10:22:16 nvme -- scripts/common.sh@355 -- # echo 1
00:07:24.195    10:22:16 nvme -- scripts/common.sh@365 -- # ver1[v]=1
00:07:24.195     10:22:16 nvme -- scripts/common.sh@366 -- # decimal 2
00:07:24.195     10:22:16 nvme -- scripts/common.sh@353 -- # local d=2
00:07:24.195     10:22:16 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:24.195     10:22:16 nvme -- scripts/common.sh@355 -- # echo 2
00:07:24.195    10:22:16 nvme -- scripts/common.sh@366 -- # ver2[v]=2
00:07:24.195    10:22:16 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:24.195    10:22:16 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:24.195    10:22:16 nvme -- scripts/common.sh@368 -- # return 0
00:07:24.195    10:22:16 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:24.195    10:22:16 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:24.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.195  		--rc genhtml_branch_coverage=1
00:07:24.195  		--rc genhtml_function_coverage=1
00:07:24.195  		--rc genhtml_legend=1
00:07:24.195  		--rc geninfo_all_blocks=1
00:07:24.195  		--rc geninfo_unexecuted_blocks=1
00:07:24.195  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:24.195  		'
00:07:24.195    10:22:16 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:24.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.195  		--rc genhtml_branch_coverage=1
00:07:24.195  		--rc genhtml_function_coverage=1
00:07:24.195  		--rc genhtml_legend=1
00:07:24.195  		--rc geninfo_all_blocks=1
00:07:24.195  		--rc geninfo_unexecuted_blocks=1
00:07:24.195  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:24.195  		'
00:07:24.195    10:22:16 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:24.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.195  		--rc genhtml_branch_coverage=1
00:07:24.195  		--rc genhtml_function_coverage=1
00:07:24.195  		--rc genhtml_legend=1
00:07:24.195  		--rc geninfo_all_blocks=1
00:07:24.195  		--rc geninfo_unexecuted_blocks=1
00:07:24.195  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:24.195  		'
00:07:24.195    10:22:16 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:24.195  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:24.195  		--rc genhtml_branch_coverage=1
00:07:24.195  		--rc genhtml_function_coverage=1
00:07:24.195  		--rc genhtml_legend=1
00:07:24.195  		--rc geninfo_all_blocks=1
00:07:24.195  		--rc geninfo_unexecuted_blocks=1
00:07:24.195  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:24.195  		'
00:07:24.195   10:22:16 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:07:24.456  hw.nic_uio.bdfs="0:16:0"
00:07:24.456    10:22:16 nvme -- nvme/nvme.sh@79 -- # uname
00:07:24.456   10:22:16 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']'
00:07:24.456   10:22:16 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:07:24.456   10:22:16 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']'
00:07:24.456   10:22:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.456   10:22:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:24.456  ************************************
00:07:24.456  START TEST nvme_reset
00:07:24.456  ************************************
00:07:24.456   10:22:16 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5
00:07:24.717  EAL: TSC is not safe to use in SMP mode
00:07:24.717  EAL: TSC is not invariant
00:07:24.717  [2024-12-09 10:22:16.810596] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:24.717  Initializing NVMe Controllers
00:07:24.717  Skipping QEMU NVMe SSD at 0000:00:10.0
00:07:24.717  No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting
00:07:24.717  
00:07:24.717  real	0m0.356s
00:07:24.717  user	0m0.000s
00:07:24.717  sys	0m0.353s
00:07:24.717   10:22:16 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:24.717  ************************************
00:07:24.717  END TEST nvme_reset
00:07:24.717  ************************************
00:07:24.717   10:22:16 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x
00:07:24.977   10:22:16 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify
00:07:24.977   10:22:16 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:24.977   10:22:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:24.977   10:22:16 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:24.977  ************************************
00:07:24.977  START TEST nvme_identify
00:07:24.977  ************************************
00:07:24.977   10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify
00:07:24.977   10:22:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=()
00:07:24.977   10:22:16 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf
00:07:24.977   10:22:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs))
00:07:24.977    10:22:16 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs
00:07:24.977    10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:24.977    10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs
00:07:24.977    10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:24.977     10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:24.977     10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:24.977    10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:07:24.977    10:22:16 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:07:24.977   10:22:16 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
00:07:25.242  EAL: TSC is not safe to use in SMP mode
00:07:25.242  EAL: TSC is not invariant
00:07:25.242  [2024-12-09 10:22:17.273114] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:25.242  =====================================================
00:07:25.242  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:25.242  =====================================================
00:07:25.242  Controller Capabilities/Features
00:07:25.242  ================================
00:07:25.242  Vendor ID:                             1b36
00:07:25.242  Subsystem Vendor ID:                   1af4
00:07:25.242  Serial Number:                         12340
00:07:25.242  Model Number:                          QEMU NVMe Ctrl
00:07:25.242  Firmware Version:                      8.0.0
00:07:25.242  Recommended Arb Burst:                 6
00:07:25.242  IEEE OUI Identifier:                   00 54 52
00:07:25.242  Multi-path I/O
00:07:25.242    May have multiple subsystem ports:   No
00:07:25.242    May have multiple controllers:       No
00:07:25.242    Associated with SR-IOV VF:           No
00:07:25.242  Max Data Transfer Size:                524288
00:07:25.242  Max Number of Namespaces:              256
00:07:25.242  Max Number of I/O Queues:              64
00:07:25.242  NVMe Specification Version (VS):       1.4
00:07:25.242  NVMe Specification Version (Identify): 1.4
00:07:25.242  Maximum Queue Entries:                 2048
00:07:25.242  Contiguous Queues Required:            Yes
00:07:25.242  Arbitration Mechanisms Supported
00:07:25.242    Weighted Round Robin:                Not Supported
00:07:25.242    Vendor Specific:                     Not Supported
00:07:25.242  Reset Timeout:                         7500 ms
00:07:25.242  Doorbell Stride:                       4 bytes
00:07:25.242  NVM Subsystem Reset:                   Not Supported
00:07:25.242  Command Sets Supported
00:07:25.242    NVM Command Set:                     Supported
00:07:25.242  Boot Partition:                        Not Supported
00:07:25.242  Memory Page Size Minimum:              4096 bytes
00:07:25.242  Memory Page Size Maximum:              65536 bytes
00:07:25.242  Persistent Memory Region:              Not Supported
00:07:25.243  Optional Asynchronous Events Supported
00:07:25.243    Namespace Attribute Notices:         Supported
00:07:25.243    Firmware Activation Notices:         Not Supported
00:07:25.243    ANA Change Notices:                  Not Supported
00:07:25.243    PLE Aggregate Log Change Notices:    Not Supported
00:07:25.243    LBA Status Info Alert Notices:       Not Supported
00:07:25.243    EGE Aggregate Log Change Notices:    Not Supported
00:07:25.243    Normal NVM Subsystem Shutdown event: Not Supported
00:07:25.243    Zone Descriptor Change Notices:      Not Supported
00:07:25.243    Discovery Log Change Notices:        Not Supported
00:07:25.243  Controller Attributes
00:07:25.243    128-bit Host Identifier:             Not Supported
00:07:25.243    Non-Operational Permissive Mode:     Not Supported
00:07:25.243    NVM Sets:                            Not Supported
00:07:25.243    Read Recovery Levels:                Not Supported
00:07:25.243    Endurance Groups:                    Not Supported
00:07:25.243    Predictable Latency Mode:            Not Supported
00:07:25.243    Traffic Based Keep ALive:            Not Supported
00:07:25.243    Namespace Granularity:               Not Supported
00:07:25.243    SQ Associations:                     Not Supported
00:07:25.243    UUID List:                           Not Supported
00:07:25.243    Multi-Domain Subsystem:              Not Supported
00:07:25.243    Fixed Capacity Management:           Not Supported
00:07:25.243    Variable Capacity Management:        Not Supported
00:07:25.243    Delete Endurance Group:              Not Supported
00:07:25.243    Delete NVM Set:                      Not Supported
00:07:25.243    Extended LBA Formats Supported:      Supported
00:07:25.243    Flexible Data Placement Supported:   Not Supported
00:07:25.243  
00:07:25.243  Controller Memory Buffer Support
00:07:25.243  ================================
00:07:25.243  Supported:                             No
00:07:25.243  
00:07:25.243  Persistent Memory Region Support
00:07:25.243  ================================
00:07:25.243  Supported:                             No
00:07:25.243  
00:07:25.243  Admin Command Set Attributes
00:07:25.243  ============================
00:07:25.243  Security Send/Receive:                 Not Supported
00:07:25.243  Format NVM:                            Supported
00:07:25.243  Firmware Activate/Download:            Not Supported
00:07:25.243  Namespace Management:                  Supported
00:07:25.243  Device Self-Test:                      Not Supported
00:07:25.243  Directives:                            Supported
00:07:25.243  NVMe-MI:                               Not Supported
00:07:25.243  Virtualization Management:             Not Supported
00:07:25.243  Doorbell Buffer Config:                Supported
00:07:25.243  Get LBA Status Capability:             Not Supported
00:07:25.243  Command & Feature Lockdown Capability: Not Supported
00:07:25.243  Abort Command Limit:                   4
00:07:25.243  Async Event Request Limit:             4
00:07:25.243  Number of Firmware Slots:              N/A
00:07:25.243  Firmware Slot 1 Read-Only:             N/A
00:07:25.243  Firmware Activation Without Reset:     N/A
00:07:25.243  Multiple Update Detection Support:     N/A
00:07:25.243  Firmware Update Granularity:           No Information Provided
00:07:25.243  Per-Namespace SMART Log:               Yes
00:07:25.243  Asymmetric Namespace Access Log Page:  Not Supported
00:07:25.243  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:07:25.243  Command Effects Log Page:              Supported
00:07:25.243  Get Log Page Extended Data:            Supported
00:07:25.243  Telemetry Log Pages:                   Not Supported
00:07:25.243  Persistent Event Log Pages:            Not Supported
00:07:25.243  Supported Log Pages Log Page:          May Support
00:07:25.243  Commands Supported & Effects Log Page: Not Supported
00:07:25.243  Feature Identifiers & Effects Log Page:May Support
00:07:25.243  NVMe-MI Commands & Effects Log Page:   May Support
00:07:25.243  Data Area 4 for Telemetry Log:         Not Supported
00:07:25.243  Error Log Page Entries Supported:      1
00:07:25.243  Keep Alive:                            Not Supported
00:07:25.243  
00:07:25.243  NVM Command Set Attributes
00:07:25.243  ==========================
00:07:25.243  Submission Queue Entry Size
00:07:25.243    Max:                       64
00:07:25.243    Min:                       64
00:07:25.243  Completion Queue Entry Size
00:07:25.243    Max:                       16
00:07:25.243    Min:                       16
00:07:25.243  Number of Namespaces:        256
00:07:25.243  Compare Command:             Supported
00:07:25.243  Write Uncorrectable Command: Not Supported
00:07:25.243  Dataset Management Command:  Supported
00:07:25.243  Write Zeroes Command:        Supported
00:07:25.243  Set Features Save Field:     Supported
00:07:25.243  Reservations:                Not Supported
00:07:25.243  Timestamp:                   Supported
00:07:25.243  Copy:                        Supported
00:07:25.243  Volatile Write Cache:        Present
00:07:25.243  Atomic Write Unit (Normal):  1
00:07:25.243  Atomic Write Unit (PFail):   1
00:07:25.243  Atomic Compare & Write Unit: 1
00:07:25.243  Fused Compare & Write:       Not Supported
00:07:25.243  Scatter-Gather List
00:07:25.243    SGL Command Set:           Supported
00:07:25.243    SGL Keyed:                 Not Supported
00:07:25.243    SGL Bit Bucket Descriptor: Not Supported
00:07:25.243    SGL Metadata Pointer:      Not Supported
00:07:25.243    Oversized SGL:             Not Supported
00:07:25.243    SGL Metadata Address:      Not Supported
00:07:25.243    SGL Offset:                Not Supported
00:07:25.243    Transport SGL Data Block:  Not Supported
00:07:25.243  Replay Protected Memory Block:  Not Supported
00:07:25.243  
00:07:25.243  Firmware Slot Information
00:07:25.243  =========================
00:07:25.243  Active slot:                 1
00:07:25.243  Slot 1 Firmware Revision:    1.0
00:07:25.243  
00:07:25.243  
00:07:25.243  Commands Supported and Effects
00:07:25.243  ==============================
00:07:25.243  Admin Commands
00:07:25.243  --------------
00:07:25.243     Delete I/O Submission Queue (00h): Supported 
00:07:25.243     Create I/O Submission Queue (01h): Supported 
00:07:25.243                    Get Log Page (02h): Supported 
00:07:25.243     Delete I/O Completion Queue (04h): Supported 
00:07:25.243     Create I/O Completion Queue (05h): Supported 
00:07:25.243                        Identify (06h): Supported 
00:07:25.243                           Abort (08h): Supported 
00:07:25.243                    Set Features (09h): Supported 
00:07:25.243                    Get Features (0Ah): Supported 
00:07:25.243      Asynchronous Event Request (0Ch): Supported 
00:07:25.243            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:25.243                  Directive Send (19h): Supported 
00:07:25.243               Directive Receive (1Ah): Supported 
00:07:25.243       Virtualization Management (1Ch): Supported 
00:07:25.243          Doorbell Buffer Config (7Ch): Supported 
00:07:25.243                      Format NVM (80h): Supported LBA-Change 
00:07:25.243  I/O Commands
00:07:25.243  ------------
00:07:25.243                           Flush (00h): Supported LBA-Change 
00:07:25.243                           Write (01h): Supported LBA-Change 
00:07:25.243                            Read (02h): Supported 
00:07:25.243                         Compare (05h): Supported 
00:07:25.243                    Write Zeroes (08h): Supported LBA-Change 
00:07:25.243              Dataset Management (09h): Supported LBA-Change 
00:07:25.243                         Unknown (0Ch): Supported 
00:07:25.243                         Unknown (12h): Supported 
00:07:25.243                            Copy (19h): Supported LBA-Change 
00:07:25.243                         Unknown (1Dh): Supported LBA-Change 
00:07:25.243  
00:07:25.243  Error Log
00:07:25.243  =========
00:07:25.243  
00:07:25.243  Arbitration
00:07:25.243  ===========
00:07:25.243  Arbitration Burst:           no limit
00:07:25.243  
00:07:25.243  Power Management
00:07:25.243  ================
00:07:25.243  Number of Power States:          1
00:07:25.243  Current Power State:             Power State #0
00:07:25.243  Power State #0:
00:07:25.243    Max Power:                     25.00 W
00:07:25.243    Non-Operational State:         Operational
00:07:25.243    Entry Latency:                 16 microseconds
00:07:25.243    Exit Latency:                  4 microseconds
00:07:25.243    Relative Read Throughput:      0
00:07:25.243    Relative Read Latency:         0
00:07:25.243    Relative Write Throughput:     0
00:07:25.243    Relative Write Latency:        0
00:07:25.243    Idle Power:                     Not Reported
00:07:25.243    Active Power:                   Not Reported
00:07:25.243  Non-Operational Permissive Mode: Not Supported
00:07:25.243  
00:07:25.243  Health Information
00:07:25.243  ==================
00:07:25.243  Critical Warnings:
00:07:25.243    Available Spare Space:     OK
00:07:25.243    Temperature:               OK
00:07:25.243    Device Reliability:        OK
00:07:25.243    Read Only:                 No
00:07:25.243    Volatile Memory Backup:    OK
00:07:25.243  Current Temperature:         323 Kelvin (50 Celsius)
00:07:25.243  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:25.243  Available Spare:             0%
00:07:25.243  Available Spare Threshold:   0%
00:07:25.243  Life Percentage Used:        0%
00:07:25.243  Data Units Read:             10997
00:07:25.243  Data Units Written:          10982
00:07:25.243  Host Read Commands:          410714
00:07:25.243  Host Write Commands:         410588
00:07:25.243  Controller Busy Time:        0 minutes
00:07:25.243  Power Cycles:                0
00:07:25.243  Power On Hours:              0 hours
00:07:25.243  Unsafe Shutdowns:            0
00:07:25.243  Unrecoverable Media Errors:  0
00:07:25.243  Lifetime Error Log Entries:  0
00:07:25.243  Warning Temperature Time:    0 minutes
00:07:25.243  Critical Temperature Time:   0 minutes
00:07:25.243  
00:07:25.243  Number of Queues
00:07:25.243  ================
00:07:25.243  Number of I/O Submission Queues:      64
00:07:25.243  Number of I/O Completion Queues:      64
00:07:25.243  
00:07:25.243  ZNS Specific Controller Data
00:07:25.243  ============================
00:07:25.243  Zone Append Size Limit:      0
00:07:25.243  
00:07:25.243  
00:07:25.243  Active Namespaces
00:07:25.243  =================
00:07:25.243  Namespace ID:1
00:07:25.243  Error Recovery Timeout:                Unlimited
00:07:25.243  Command Set Identifier:                NVM (00h)
00:07:25.243  Deallocate:                            Supported
00:07:25.243  Deallocated/Unwritten Error:           Supported
00:07:25.243  Deallocated Read Value:                All 0x00
00:07:25.243  Deallocate in Write Zeroes:            Not Supported
00:07:25.243  Deallocated Guard Field:               0xFFFF
00:07:25.243  Flush:                                 Supported
00:07:25.244  Reservation:                           Not Supported
00:07:25.244  Namespace Sharing Capabilities:        Private
00:07:25.244  Size (in LBAs):                        1310720 (5GiB)
00:07:25.244  Capacity (in LBAs):                    1310720 (5GiB)
00:07:25.244  Utilization (in LBAs):                 1310720 (5GiB)
00:07:25.244  Thin Provisioning:                     Not Supported
00:07:25.244  Per-NS Atomic Units:                   No
00:07:25.244  Maximum Single Source Range Length:    128
00:07:25.244  Maximum Copy Length:                   128
00:07:25.244  Maximum Source Range Count:            128
00:07:25.244  NGUID/EUI64 Never Reused:              No
00:07:25.244  Namespace Write Protected:             No
00:07:25.244  Number of LBA Formats:                 8
00:07:25.244  Current LBA Format:                    LBA Format #04
00:07:25.244  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:25.244  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:25.244  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:25.244  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:25.244  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:25.244  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:25.244  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:25.244  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:25.244  
00:07:25.244  NVM Specific Namespace Data
00:07:25.244  ===========================
00:07:25.244  Logical Block Storage Tag Mask:               0
00:07:25.244  Protection Information Capabilities:
00:07:25.244    16b Guard Protection Information Storage Tag Support:  No
00:07:25.244    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:25.244    Storage Tag Check Read Support:                        No
00:07:25.244  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.244   10:22:17 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}"
00:07:25.244   10:22:17 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0
00:07:25.505  EAL: TSC is not safe to use in SMP mode
00:07:25.505  EAL: TSC is not invariant
00:07:25.505  [2024-12-09 10:22:17.613695] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:25.505  =====================================================
00:07:25.505  NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:25.505  =====================================================
00:07:25.505  Controller Capabilities/Features
00:07:25.505  ================================
00:07:25.505  Vendor ID:                             1b36
00:07:25.505  Subsystem Vendor ID:                   1af4
00:07:25.505  Serial Number:                         12340
00:07:25.505  Model Number:                          QEMU NVMe Ctrl
00:07:25.505  Firmware Version:                      8.0.0
00:07:25.505  Recommended Arb Burst:                 6
00:07:25.505  IEEE OUI Identifier:                   00 54 52
00:07:25.505  Multi-path I/O
00:07:25.505    May have multiple subsystem ports:   No
00:07:25.505    May have multiple controllers:       No
00:07:25.505    Associated with SR-IOV VF:           No
00:07:25.505  Max Data Transfer Size:                524288
00:07:25.505  Max Number of Namespaces:              256
00:07:25.505  Max Number of I/O Queues:              64
00:07:25.505  NVMe Specification Version (VS):       1.4
00:07:25.505  NVMe Specification Version (Identify): 1.4
00:07:25.505  Maximum Queue Entries:                 2048
00:07:25.505  Contiguous Queues Required:            Yes
00:07:25.505  Arbitration Mechanisms Supported
00:07:25.505    Weighted Round Robin:                Not Supported
00:07:25.505    Vendor Specific:                     Not Supported
00:07:25.505  Reset Timeout:                         7500 ms
00:07:25.505  Doorbell Stride:                       4 bytes
00:07:25.505  NVM Subsystem Reset:                   Not Supported
00:07:25.505  Command Sets Supported
00:07:25.505    NVM Command Set:                     Supported
00:07:25.505  Boot Partition:                        Not Supported
00:07:25.505  Memory Page Size Minimum:              4096 bytes
00:07:25.505  Memory Page Size Maximum:              65536 bytes
00:07:25.505  Persistent Memory Region:              Not Supported
00:07:25.505  Optional Asynchronous Events Supported
00:07:25.505    Namespace Attribute Notices:         Supported
00:07:25.505    Firmware Activation Notices:         Not Supported
00:07:25.505    ANA Change Notices:                  Not Supported
00:07:25.505    PLE Aggregate Log Change Notices:    Not Supported
00:07:25.505    LBA Status Info Alert Notices:       Not Supported
00:07:25.505    EGE Aggregate Log Change Notices:    Not Supported
00:07:25.505    Normal NVM Subsystem Shutdown event: Not Supported
00:07:25.505    Zone Descriptor Change Notices:      Not Supported
00:07:25.505    Discovery Log Change Notices:        Not Supported
00:07:25.505  Controller Attributes
00:07:25.505    128-bit Host Identifier:             Not Supported
00:07:25.505    Non-Operational Permissive Mode:     Not Supported
00:07:25.505    NVM Sets:                            Not Supported
00:07:25.505    Read Recovery Levels:                Not Supported
00:07:25.505    Endurance Groups:                    Not Supported
00:07:25.505    Predictable Latency Mode:            Not Supported
00:07:25.505    Traffic Based Keep ALive:            Not Supported
00:07:25.505    Namespace Granularity:               Not Supported
00:07:25.505    SQ Associations:                     Not Supported
00:07:25.505    UUID List:                           Not Supported
00:07:25.505    Multi-Domain Subsystem:              Not Supported
00:07:25.505    Fixed Capacity Management:           Not Supported
00:07:25.505    Variable Capacity Management:        Not Supported
00:07:25.505    Delete Endurance Group:              Not Supported
00:07:25.505    Delete NVM Set:                      Not Supported
00:07:25.505    Extended LBA Formats Supported:      Supported
00:07:25.505    Flexible Data Placement Supported:   Not Supported
00:07:25.505  
00:07:25.505  Controller Memory Buffer Support
00:07:25.505  ================================
00:07:25.505  Supported:                             No
00:07:25.505  
00:07:25.505  Persistent Memory Region Support
00:07:25.505  ================================
00:07:25.505  Supported:                             No
00:07:25.505  
00:07:25.505  Admin Command Set Attributes
00:07:25.505  ============================
00:07:25.505  Security Send/Receive:                 Not Supported
00:07:25.505  Format NVM:                            Supported
00:07:25.505  Firmware Activate/Download:            Not Supported
00:07:25.505  Namespace Management:                  Supported
00:07:25.505  Device Self-Test:                      Not Supported
00:07:25.505  Directives:                            Supported
00:07:25.505  NVMe-MI:                               Not Supported
00:07:25.505  Virtualization Management:             Not Supported
00:07:25.505  Doorbell Buffer Config:                Supported
00:07:25.505  Get LBA Status Capability:             Not Supported
00:07:25.505  Command & Feature Lockdown Capability: Not Supported
00:07:25.505  Abort Command Limit:                   4
00:07:25.505  Async Event Request Limit:             4
00:07:25.505  Number of Firmware Slots:              N/A
00:07:25.505  Firmware Slot 1 Read-Only:             N/A
00:07:25.505  Firmware Activation Without Reset:     N/A
00:07:25.505  Multiple Update Detection Support:     N/A
00:07:25.505  Firmware Update Granularity:           No Information Provided
00:07:25.505  Per-Namespace SMART Log:               Yes
00:07:25.505  Asymmetric Namespace Access Log Page:  Not Supported
00:07:25.505  Subsystem NQN:                         nqn.2019-08.org.qemu:12340
00:07:25.505  Command Effects Log Page:              Supported
00:07:25.505  Get Log Page Extended Data:            Supported
00:07:25.505  Telemetry Log Pages:                   Not Supported
00:07:25.505  Persistent Event Log Pages:            Not Supported
00:07:25.505  Supported Log Pages Log Page:          May Support
00:07:25.505  Commands Supported & Effects Log Page: Not Supported
00:07:25.506  Feature Identifiers & Effects Log Page: May Support
00:07:25.506  NVMe-MI Commands & Effects Log Page:   May Support
00:07:25.506  Data Area 4 for Telemetry Log:         Not Supported
00:07:25.506  Error Log Page Entries Supported:      1
00:07:25.506  Keep Alive:                            Not Supported
00:07:25.506  
00:07:25.506  NVM Command Set Attributes
00:07:25.506  ==========================
00:07:25.506  Submission Queue Entry Size
00:07:25.506    Max:                       64
00:07:25.506    Min:                       64
00:07:25.506  Completion Queue Entry Size
00:07:25.506    Max:                       16
00:07:25.506    Min:                       16
00:07:25.506  Number of Namespaces:        256
00:07:25.506  Compare Command:             Supported
00:07:25.506  Write Uncorrectable Command: Not Supported
00:07:25.506  Dataset Management Command:  Supported
00:07:25.506  Write Zeroes Command:        Supported
00:07:25.506  Set Features Save Field:     Supported
00:07:25.506  Reservations:                Not Supported
00:07:25.506  Timestamp:                   Supported
00:07:25.506  Copy:                        Supported
00:07:25.506  Volatile Write Cache:        Present
00:07:25.506  Atomic Write Unit (Normal):  1
00:07:25.506  Atomic Write Unit (PFail):   1
00:07:25.506  Atomic Compare & Write Unit: 1
00:07:25.506  Fused Compare & Write:       Not Supported
00:07:25.506  Scatter-Gather List
00:07:25.506    SGL Command Set:           Supported
00:07:25.506    SGL Keyed:                 Not Supported
00:07:25.506    SGL Bit Bucket Descriptor: Not Supported
00:07:25.506    SGL Metadata Pointer:      Not Supported
00:07:25.506    Oversized SGL:             Not Supported
00:07:25.506    SGL Metadata Address:      Not Supported
00:07:25.506    SGL Offset:                Not Supported
00:07:25.506    Transport SGL Data Block:  Not Supported
00:07:25.506  Replay Protected Memory Block:  Not Supported
00:07:25.506  
00:07:25.506  Firmware Slot Information
00:07:25.506  =========================
00:07:25.506  Active slot:                 1
00:07:25.506  Slot 1 Firmware Revision:    1.0
00:07:25.506  
00:07:25.506  
00:07:25.506  Commands Supported and Effects
00:07:25.506  ==============================
00:07:25.506  Admin Commands
00:07:25.506  --------------
00:07:25.506     Delete I/O Submission Queue (00h): Supported 
00:07:25.506     Create I/O Submission Queue (01h): Supported 
00:07:25.506                    Get Log Page (02h): Supported 
00:07:25.506     Delete I/O Completion Queue (04h): Supported 
00:07:25.506     Create I/O Completion Queue (05h): Supported 
00:07:25.506                        Identify (06h): Supported 
00:07:25.506                           Abort (08h): Supported 
00:07:25.506                    Set Features (09h): Supported 
00:07:25.506                    Get Features (0Ah): Supported 
00:07:25.506      Asynchronous Event Request (0Ch): Supported 
00:07:25.506            Namespace Attachment (15h): Supported NS-Inventory-Change 
00:07:25.506                  Directive Send (19h): Supported 
00:07:25.506               Directive Receive (1Ah): Supported 
00:07:25.506       Virtualization Management (1Ch): Supported 
00:07:25.506          Doorbell Buffer Config (7Ch): Supported 
00:07:25.506                      Format NVM (80h): Supported LBA-Change 
00:07:25.506  I/O Commands
00:07:25.506  ------------
00:07:25.506                           Flush (00h): Supported LBA-Change 
00:07:25.506                           Write (01h): Supported LBA-Change 
00:07:25.506                            Read (02h): Supported 
00:07:25.506                         Compare (05h): Supported 
00:07:25.506                    Write Zeroes (08h): Supported LBA-Change 
00:07:25.506              Dataset Management (09h): Supported LBA-Change 
00:07:25.506                         Unknown (0Ch): Supported 
00:07:25.506                         Unknown (12h): Supported 
00:07:25.506                            Copy (19h): Supported LBA-Change 
00:07:25.506                         Unknown (1Dh): Supported LBA-Change 
00:07:25.506  
00:07:25.506  Error Log
00:07:25.506  =========
00:07:25.506  
00:07:25.506  Arbitration
00:07:25.506  ===========
00:07:25.506  Arbitration Burst:           no limit
00:07:25.506  
00:07:25.506  Power Management
00:07:25.506  ================
00:07:25.506  Number of Power States:          1
00:07:25.506  Current Power State:             Power State #0
00:07:25.506  Power State #0:
00:07:25.506    Max Power:                     25.00 W
00:07:25.506    Non-Operational State:         Operational
00:07:25.506    Entry Latency:                 16 microseconds
00:07:25.506    Exit Latency:                  4 microseconds
00:07:25.506    Relative Read Throughput:      0
00:07:25.506    Relative Read Latency:         0
00:07:25.506    Relative Write Throughput:     0
00:07:25.506    Relative Write Latency:        0
00:07:25.506    Idle Power:                     Not Reported
00:07:25.506    Active Power:                   Not Reported
00:07:25.506  Non-Operational Permissive Mode: Not Supported
00:07:25.506  
00:07:25.506  Health Information
00:07:25.506  ==================
00:07:25.506  Critical Warnings:
00:07:25.506    Available Spare Space:     OK
00:07:25.506    Temperature:               OK
00:07:25.506    Device Reliability:        OK
00:07:25.506    Read Only:                 No
00:07:25.506    Volatile Memory Backup:    OK
00:07:25.506  Current Temperature:         323 Kelvin (50 Celsius)
00:07:25.506  Temperature Threshold:       343 Kelvin (70 Celsius)
00:07:25.506  Available Spare:             0%
00:07:25.506  Available Spare Threshold:   0%
00:07:25.506  Life Percentage Used:        0%
00:07:25.506  Data Units Read:             10997
00:07:25.506  Data Units Written:          10982
00:07:25.506  Host Read Commands:          410714
00:07:25.506  Host Write Commands:         410588
00:07:25.506  Controller Busy Time:        0 minutes
00:07:25.506  Power Cycles:                0
00:07:25.506  Power On Hours:              0 hours
00:07:25.506  Unsafe Shutdowns:            0
00:07:25.506  Unrecoverable Media Errors:  0
00:07:25.506  Lifetime Error Log Entries:  0
00:07:25.506  Warning Temperature Time:    0 minutes
00:07:25.506  Critical Temperature Time:   0 minutes
00:07:25.506  
00:07:25.506  Number of Queues
00:07:25.506  ================
00:07:25.506  Number of I/O Submission Queues:      64
00:07:25.506  Number of I/O Completion Queues:      64
00:07:25.506  
00:07:25.506  ZNS Specific Controller Data
00:07:25.506  ============================
00:07:25.506  Zone Append Size Limit:      0
00:07:25.506  
00:07:25.506  
00:07:25.506  Active Namespaces
00:07:25.506  =================
00:07:25.506  Namespace ID:1
00:07:25.506  Error Recovery Timeout:                Unlimited
00:07:25.506  Command Set Identifier:                NVM (00h)
00:07:25.506  Deallocate:                            Supported
00:07:25.506  Deallocated/Unwritten Error:           Supported
00:07:25.506  Deallocated Read Value:                All 0x00
00:07:25.506  Deallocate in Write Zeroes:            Not Supported
00:07:25.506  Deallocated Guard Field:               0xFFFF
00:07:25.506  Flush:                                 Supported
00:07:25.506  Reservation:                           Not Supported
00:07:25.506  Namespace Sharing Capabilities:        Private
00:07:25.506  Size (in LBAs):                        1310720 (5GiB)
00:07:25.506  Capacity (in LBAs):                    1310720 (5GiB)
00:07:25.506  Utilization (in LBAs):                 1310720 (5GiB)
00:07:25.506  Thin Provisioning:                     Not Supported
00:07:25.506  Per-NS Atomic Units:                   No
00:07:25.506  Maximum Single Source Range Length:    128
00:07:25.506  Maximum Copy Length:                   128
00:07:25.506  Maximum Source Range Count:            128
00:07:25.506  NGUID/EUI64 Never Reused:              No
00:07:25.506  Namespace Write Protected:             No
00:07:25.506  Number of LBA Formats:                 8
00:07:25.506  Current LBA Format:                    LBA Format #04
00:07:25.506  LBA Format #00: Data Size:   512  Metadata Size:     0
00:07:25.506  LBA Format #01: Data Size:   512  Metadata Size:     8
00:07:25.506  LBA Format #02: Data Size:   512  Metadata Size:    16
00:07:25.506  LBA Format #03: Data Size:   512  Metadata Size:    64
00:07:25.506  LBA Format #04: Data Size:  4096  Metadata Size:     0
00:07:25.506  LBA Format #05: Data Size:  4096  Metadata Size:     8
00:07:25.506  LBA Format #06: Data Size:  4096  Metadata Size:    16
00:07:25.506  LBA Format #07: Data Size:  4096  Metadata Size:    64
00:07:25.506  
00:07:25.506  NVM Specific Namespace Data
00:07:25.506  ===========================
00:07:25.506  Logical Block Storage Tag Mask:               0
00:07:25.506  Protection Information Capabilities:
00:07:25.506    16b Guard Protection Information Storage Tag Support:  No
00:07:25.506    16b Guard Protection Information Storage Tag Mask:     Any bit in LBSTM can be 0
00:07:25.506    Storage Tag Check Read Support:                        No
00:07:25.506  Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
00:07:25.506  
00:07:25.506  real	0m0.731s
00:07:25.506  user	0m0.040s
00:07:25.506  sys	0m0.697s
00:07:25.506   10:22:17 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:25.506   10:22:17 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x
00:07:25.506  ************************************
00:07:25.506  END TEST nvme_identify
00:07:25.506  ************************************
00:07:25.767   10:22:17 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf
00:07:25.767   10:22:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:25.767   10:22:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:25.767   10:22:17 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:25.767  ************************************
00:07:25.767  START TEST nvme_perf
00:07:25.767  ************************************
00:07:25.767   10:22:17 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf
00:07:25.767   10:22:17 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N
00:07:26.029  EAL: TSC is not safe to use in SMP mode
00:07:26.029  EAL: TSC is not invariant
00:07:26.029  [2024-12-09 10:22:18.029802] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:26.975  Initializing NVMe Controllers
00:07:26.975  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:26.975  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:26.975  Initialization complete. Launching workers.
00:07:26.975  ========================================================
00:07:26.975                                                                             Latency(us)
00:07:26.975  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:26.975  PCIE (0000:00:10.0) NSID 1 from core  0:   64766.00     758.98    1978.39     233.25    5822.13
00:07:26.975  ========================================================
00:07:26.975  Total                                  :   64766.00     758.98    1978.39     233.25    5822.13
00:07:26.975  
00:07:26.975  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:26.975  =================================================================================
00:07:26.975    1.00000% :  1115.373us
00:07:26.975   10.00000% :  1417.847us
00:07:26.975   25.00000% :  1556.480us
00:07:26.975   50.00000% :  1802.241us
00:07:26.975   75.00000% :  2331.570us
00:07:26.975   90.00000% :  2797.884us
00:07:26.975   95.00000% :  3112.961us
00:07:26.975   98.00000% :  3352.419us
00:07:26.975   99.00000% :  3730.512us
00:07:26.975   99.50000% :  4108.604us
00:07:26.975   99.90000% :  5016.026us
00:07:26.975   99.99000% :  5545.355us
00:07:26.975   99.99900% :  5822.623us
00:07:26.975   99.99990% :  5822.623us
00:07:26.975   99.99999% :  5822.623us
00:07:26.975  
00:07:26.975  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:26.975  ==============================================================================
00:07:26.975         Range in us     Cumulative    IO count
00:07:26.975    233.157 -   234.732:    0.0015%  (        1)
00:07:26.975    252.062 -   253.637:    0.0031%  (        1)
00:07:26.975    289.871 -   291.446:    0.0108%  (        5)
00:07:26.975    291.446 -   293.022:    0.0139%  (        2)
00:07:26.975    293.022 -   294.597:    0.0154%  (        1)
00:07:26.975    296.172 -   297.748:    0.0170%  (        1)
00:07:26.975    327.680 -   329.255:    0.0185%  (        1)
00:07:26.975    332.406 -   333.982:    0.0216%  (        2)
00:07:26.975    333.982 -   335.557:    0.0247%  (        2)
00:07:26.975    335.557 -   337.132:    0.0278%  (        2)
00:07:26.975    337.132 -   338.708:    0.0293%  (        1)
00:07:26.975    338.708 -   340.283:    0.0309%  (        1)
00:07:26.975    340.283 -   341.859:    0.0324%  (        1)
00:07:26.975    341.859 -   343.434:    0.0355%  (        2)
00:07:26.975    343.434 -   345.009:    0.0386%  (        2)
00:07:26.975    345.009 -   346.585:    0.0401%  (        1)
00:07:26.975    346.585 -   348.160:    0.0417%  (        1)
00:07:26.975    352.886 -   354.462:    0.0432%  (        1)
00:07:26.975    354.462 -   356.037:    0.0463%  (        2)
00:07:26.975    356.037 -   357.612:    0.0479%  (        1)
00:07:26.975    357.612 -   359.188:    0.0494%  (        1)
00:07:26.975    359.188 -   360.763:    0.0525%  (        2)
00:07:26.975    360.763 -   362.339:    0.0540%  (        1)
00:07:26.975    362.339 -   363.914:    0.0556%  (        1)
00:07:26.975    363.914 -   365.489:    0.0587%  (        2)
00:07:26.975    365.489 -   367.065:    0.0602%  (        1)
00:07:26.975    367.065 -   368.640:    0.0618%  (        1)
00:07:26.975    368.640 -   370.215:    0.0664%  (        3)
00:07:26.975    370.215 -   371.791:    0.0679%  (        1)
00:07:26.975    382.819 -   384.394:    0.0695%  (        1)
00:07:26.975    384.394 -   385.969:    0.0710%  (        1)
00:07:26.975    385.969 -   387.545:    0.0726%  (        1)
00:07:26.975    387.545 -   389.120:    0.0741%  (        1)
00:07:26.975    389.120 -   390.695:    0.0787%  (        3)
00:07:26.975    390.695 -   392.271:    0.0818%  (        2)
00:07:26.975    392.271 -   393.846:    0.0880%  (        4)
00:07:26.975    393.846 -   395.422:    0.0926%  (        3)
00:07:26.975    395.422 -   396.997:    0.0957%  (        2)
00:07:26.975    412.751 -   415.902:    0.0988%  (        2)
00:07:26.975    415.902 -   419.052:    0.1019%  (        2)
00:07:26.975    419.052 -   422.203:    0.1050%  (        2)
00:07:26.975    425.354 -   428.505:    0.1065%  (        1)
00:07:26.975    428.505 -   431.656:    0.1173%  (        7)
00:07:26.975    434.806 -   437.957:    0.1189%  (        1)
00:07:26.975    437.957 -   441.108:    0.1204%  (        1)
00:07:26.975    450.560 -   453.711:    0.1220%  (        1)
00:07:26.975    456.862 -   460.012:    0.1235%  (        1)
00:07:26.975    460.012 -   463.163:    0.1251%  (        1)
00:07:26.975    463.163 -   466.314:    0.1266%  (        1)
00:07:26.975    466.314 -   469.465:    0.1328%  (        4)
00:07:26.975    507.274 -   510.425:    0.1359%  (        2)
00:07:26.975    510.425 -   513.576:    0.1420%  (        4)
00:07:26.975    513.576 -   516.726:    0.1498%  (        5)
00:07:26.975    516.726 -   519.877:    0.1575%  (        5)
00:07:26.975    519.877 -   523.028:    0.1590%  (        1)
00:07:26.975    523.028 -   526.179:    0.1606%  (        1)
00:07:26.975    526.179 -   529.329:    0.1637%  (        2)
00:07:26.975    535.631 -   538.782:    0.1652%  (        1)
00:07:26.975    554.536 -   557.686:    0.1668%  (        1)
00:07:26.975    567.139 -   570.289:    0.1683%  (        1)
00:07:26.975    586.043 -   589.194:    0.1776%  (        6)
00:07:26.975    592.345 -   595.496:    0.1837%  (        4)
00:07:26.975    595.496 -   598.646:    0.1853%  (        1)
00:07:26.975    601.797 -   604.948:    0.1884%  (        2)
00:07:26.975    604.948 -   608.099:    0.1945%  (        4)
00:07:26.975    608.099 -   611.249:    0.1992%  (        3)
00:07:26.975    614.400 -   617.551:    0.2023%  (        2)
00:07:26.975    617.551 -   620.702:    0.2038%  (        1)
00:07:26.975    620.702 -   623.852:    0.2054%  (        1)
00:07:26.975    623.852 -   627.003:    0.2100%  (        3)
00:07:26.975    636.456 -   639.606:    0.2115%  (        1)
00:07:26.975    649.059 -   652.209:    0.2146%  (        2)
00:07:26.975    652.209 -   655.360:    0.2239%  (        6)
00:07:26.975    655.360 -   658.511:    0.2440%  (       13)
00:07:26.975    658.511 -   661.662:    0.2470%  (        2)
00:07:26.976    661.662 -   664.812:    0.2517%  (        3)
00:07:26.976    664.812 -   667.963:    0.2579%  (        4)
00:07:26.976    677.416 -   680.566:    0.2594%  (        1)
00:07:26.976    683.717 -   686.868:    0.2609%  (        1)
00:07:26.976    699.471 -   702.622:    0.2625%  (        1)
00:07:26.976    702.622 -   705.773:    0.2640%  (        1)
00:07:26.976    705.773 -   708.923:    0.2671%  (        2)
00:07:26.976    708.923 -   712.074:    0.2733%  (        4)
00:07:26.976    712.074 -   715.225:    0.2779%  (        3)
00:07:26.976    715.225 -   718.376:    0.2810%  (        2)
00:07:26.976    718.376 -   721.526:    0.2856%  (        3)
00:07:26.976    721.526 -   724.677:    0.2934%  (        5)
00:07:26.976    724.677 -   727.828:    0.2949%  (        1)
00:07:26.976    727.828 -   730.979:    0.2980%  (        2)
00:07:26.976    730.979 -   734.129:    0.3011%  (        2)
00:07:26.976    734.129 -   737.280:    0.3088%  (        5)
00:07:26.976    737.280 -   740.431:    0.3150%  (        4)
00:07:26.976    740.431 -   743.582:    0.3320%  (       11)
00:07:26.976    743.582 -   746.733:    0.3520%  (       13)
00:07:26.976    746.733 -   749.883:    0.3598%  (        5)
00:07:26.976    749.883 -   753.034:    0.3644%  (        3)
00:07:26.976    753.034 -   756.185:    0.3706%  (        4)
00:07:26.976    756.185 -   759.336:    0.3752%  (        3)
00:07:26.976    759.336 -   762.486:    0.3798%  (        3)
00:07:26.976    762.486 -   765.637:    0.3829%  (        2)
00:07:26.976    765.637 -   768.788:    0.3922%  (        6)
00:07:26.976    768.788 -   771.939:    0.3937%  (        1)
00:07:26.976    775.089 -   778.240:    0.3968%  (        2)
00:07:26.976    778.240 -   781.391:    0.4014%  (        3)
00:07:26.976    781.391 -   784.542:    0.4045%  (        2)
00:07:26.976    784.542 -   787.693:    0.4092%  (        3)
00:07:26.976    787.693 -   790.843:    0.4138%  (        3)
00:07:26.976    790.843 -   793.994:    0.4169%  (        2)
00:07:26.976    793.994 -   797.145:    0.4231%  (        4)
00:07:26.976    797.145 -   800.296:    0.4308%  (        5)
00:07:26.976    800.296 -   803.446:    0.4370%  (        4)
00:07:26.976    803.446 -   806.597:    0.4400%  (        2)
00:07:26.976    806.597 -   812.899:    0.4539%  (        9)
00:07:26.976    812.899 -   819.200:    0.4570%  (        2)
00:07:26.976    819.200 -   825.502:    0.4601%  (        2)
00:07:26.976    838.105 -   844.406:    0.4617%  (        1)
00:07:26.976    857.009 -   863.311:    0.4663%  (        3)
00:07:26.976    863.311 -   869.613:    0.4709%  (        3)
00:07:26.976    869.613 -   875.914:    0.4725%  (        1)
00:07:26.976    875.914 -   882.216:    0.4756%  (        2)
00:07:26.976    882.216 -   888.517:    0.4786%  (        2)
00:07:26.976    888.517 -   894.819:    0.4833%  (        3)
00:07:26.976    894.819 -   901.120:    0.4895%  (        4)
00:07:26.976    901.120 -   907.422:    0.4956%  (        4)
00:07:26.976    907.422 -   913.723:    0.5018%  (        4)
00:07:26.976    913.723 -   920.025:    0.5095%  (        5)
00:07:26.976    920.025 -   926.326:    0.5358%  (       17)
00:07:26.976    926.326 -   932.628:    0.5574%  (       14)
00:07:26.976    932.628 -   938.929:    0.5682%  (        7)
00:07:26.976    938.929 -   945.231:    0.5775%  (        6)
00:07:26.976    945.231 -   951.533:    0.5836%  (        4)
00:07:26.976    951.533 -   957.834:    0.5929%  (        6)
00:07:26.976    957.834 -   964.136:    0.6068%  (        9)
00:07:26.976    964.136 -   970.437:    0.6192%  (        8)
00:07:26.976    970.437 -   976.739:    0.6346%  (       10)
00:07:26.976    976.739 -   983.040:    0.6516%  (       11)
00:07:26.976    983.040 -   989.342:    0.6624%  (        7)
00:07:26.976    989.342 -   995.643:    0.6732%  (        7)
00:07:26.976    995.643 -  1001.945:    0.6809%  (        5)
00:07:26.976   1001.945 -  1008.246:    0.6948%  (        9)
00:07:26.976   1008.246 -  1014.548:    0.7041%  (        6)
00:07:26.976   1014.548 -  1020.850:    0.7211%  (       11)
00:07:26.976   1020.850 -  1027.151:    0.7334%  (        8)
00:07:26.976   1027.151 -  1033.453:    0.7458%  (        8)
00:07:26.976   1033.453 -  1039.754:    0.7597%  (        9)
00:07:26.976   1039.754 -  1046.056:    0.7720%  (        8)
00:07:26.976   1046.056 -  1052.357:    0.7766%  (        3)
00:07:26.976   1052.357 -  1058.659:    0.7859%  (        6)
00:07:26.976   1058.659 -  1064.960:    0.7983%  (        8)
00:07:26.976   1064.960 -  1071.262:    0.8168%  (       12)
00:07:26.976   1071.262 -  1077.563:    0.8415%  (       16)
00:07:26.976   1077.563 -  1083.865:    0.8616%  (       13)
00:07:26.976   1083.865 -  1090.166:    0.8832%  (       14)
00:07:26.976   1090.166 -  1096.468:    0.9264%  (       28)
00:07:26.976   1096.468 -  1102.770:    0.9542%  (       18)
00:07:26.976   1102.770 -  1109.071:    0.9851%  (       20)
00:07:26.976   1109.071 -  1115.373:    1.0283%  (       28)
00:07:26.976   1115.373 -  1121.674:    1.0731%  (       29)
00:07:26.976   1121.674 -  1127.976:    1.1194%  (       30)
00:07:26.976   1127.976 -  1134.277:    1.1673%  (       31)
00:07:26.976   1134.277 -  1140.579:    1.2121%  (       29)
00:07:26.976   1140.579 -  1146.880:    1.2507%  (       25)
00:07:26.976   1146.880 -  1153.182:    1.2862%  (       23)
00:07:26.976   1153.182 -  1159.483:    1.3248%  (       25)
00:07:26.976   1159.483 -  1165.785:    1.3726%  (       31)
00:07:26.976   1165.785 -  1172.086:    1.4282%  (       36)
00:07:26.976   1172.086 -  1178.388:    1.4745%  (       30)
00:07:26.976   1178.388 -  1184.690:    1.5348%  (       39)
00:07:26.976   1184.690 -  1190.991:    1.5919%  (       37)
00:07:26.976   1190.991 -  1197.293:    1.6753%  (       54)
00:07:26.976   1197.293 -  1203.594:    1.7741%  (       64)
00:07:26.976   1203.594 -  1209.896:    1.8621%  (       57)
00:07:26.976   1209.896 -  1216.197:    1.9578%  (       62)
00:07:26.976   1216.197 -  1222.499:    2.0597%  (       66)
00:07:26.976   1222.499 -  1228.800:    2.1647%  (       68)
00:07:26.976   1228.800 -  1235.102:    2.2666%  (       66)
00:07:26.976   1235.102 -  1241.403:    2.3485%  (       53)
00:07:26.976   1241.403 -  1247.705:    2.4519%  (       67)
00:07:26.976   1247.705 -  1254.007:    2.5631%  (       72)
00:07:26.976   1254.007 -  1260.308:    2.7082%  (       94)
00:07:26.976   1260.308 -  1266.610:    2.8796%  (      111)
00:07:26.976   1266.610 -  1272.911:    3.0726%  (      125)
00:07:26.976   1272.911 -  1279.213:    3.2471%  (      113)
00:07:26.976   1279.213 -  1285.514:    3.4200%  (      112)
00:07:26.976   1285.514 -  1291.816:    3.6269%  (      134)
00:07:26.976   1291.816 -  1298.117:    3.8353%  (      135)
00:07:26.976   1298.117 -  1304.419:    4.0515%  (      140)
00:07:26.976   1304.419 -  1310.720:    4.2661%  (      139)
00:07:26.976   1310.720 -  1317.022:    4.5209%  (      165)
00:07:26.976   1317.022 -  1323.323:    4.7648%  (      158)
00:07:26.976   1323.323 -  1329.625:    5.0196%  (      165)
00:07:26.976   1329.625 -  1335.927:    5.2821%  (      170)
00:07:26.976   1335.927 -  1342.228:    5.5631%  (      182)
00:07:26.976   1342.228 -  1348.530:    5.8580%  (      191)
00:07:26.976   1348.530 -  1354.831:    6.1869%  (      213)
00:07:26.976   1354.831 -  1361.133:    6.5235%  (      218)
00:07:26.976   1361.133 -  1367.434:    6.8971%  (      242)
00:07:26.976   1367.434 -  1373.736:    7.2878%  (      253)
00:07:26.976   1373.736 -  1380.037:    7.6676%  (      246)
00:07:26.976   1380.037 -  1386.339:    8.0706%  (      261)
00:07:26.976   1386.339 -  1392.640:    8.5106%  (      285)
00:07:26.976   1392.640 -  1398.942:    8.9136%  (      261)
00:07:26.976   1398.942 -  1405.243:    9.3521%  (      284)
00:07:26.976   1405.243 -  1411.545:    9.8200%  (      303)
00:07:26.976   1411.545 -  1417.847:   10.3125%  (      319)
00:07:26.976   1417.847 -  1424.148:   10.8375%  (      340)
00:07:26.976   1424.148 -  1430.450:   11.3733%  (      347)
00:07:26.976   1430.450 -  1436.751:   11.9075%  (      346)
00:07:26.976   1436.751 -  1443.053:   12.4973%  (      382)
00:07:26.976   1443.053 -  1449.354:   13.1118%  (      398)
00:07:26.976   1449.354 -  1455.656:   13.7263%  (      398)
00:07:26.976   1455.656 -  1461.957:   14.3470%  (      402)
00:07:26.976   1461.957 -  1468.259:   15.0032%  (      425)
00:07:26.976   1468.259 -  1474.560:   15.6579%  (      424)
00:07:26.976   1474.560 -  1480.862:   16.3435%  (      444)
00:07:26.976   1480.862 -  1487.164:   17.0244%  (      441)
00:07:26.976   1487.164 -  1493.465:   17.7037%  (      440)
00:07:26.976   1493.465 -  1499.767:   18.4464%  (      481)
00:07:26.976   1499.767 -  1506.068:   19.2045%  (      491)
00:07:26.976   1506.068 -  1512.370:   19.9611%  (      490)
00:07:26.976   1512.370 -  1518.671:   20.7238%  (      494)
00:07:26.976   1518.671 -  1524.973:   21.4372%  (      462)
00:07:26.976   1524.973 -  1531.274:   22.1953%  (      491)
00:07:26.976   1531.274 -  1537.576:   22.9364%  (      480)
00:07:26.976   1537.576 -  1543.877:   23.6806%  (      482)
00:07:26.976   1543.877 -  1550.179:   24.4295%  (      485)
00:07:26.976   1550.179 -  1556.480:   25.1830%  (      488)
00:07:26.976   1556.480 -  1562.782:   25.9457%  (      494)
00:07:26.976   1562.782 -  1569.084:   26.7640%  (      530)
00:07:26.976   1569.084 -  1575.385:   27.5716%  (      523)
00:07:26.976   1575.385 -  1581.687:   28.3945%  (      533)
00:07:26.976   1581.687 -  1587.988:   29.1634%  (      498)
00:07:26.976   1587.988 -  1594.290:   29.9247%  (      493)
00:07:26.976   1594.290 -  1600.591:   30.7028%  (      504)
00:07:26.976   1600.591 -  1606.893:   31.4393%  (      477)
00:07:26.976   1606.893 -  1613.194:   32.1758%  (      477)
00:07:26.976   1613.194 -  1625.797:   33.6982%  (      986)
00:07:26.977   1625.797 -  1638.400:   35.1697%  (      953)
00:07:26.977   1638.400 -  1651.004:   36.6905%  (      985)
00:07:26.977   1651.004 -  1663.607:   38.1527%  (      947)
00:07:26.977   1663.607 -  1676.210:   39.5238%  (      888)
00:07:26.977   1676.210 -  1688.813:   40.8795%  (      878)
00:07:26.977   1688.813 -  1701.416:   42.1857%  (      846)
00:07:26.977   1701.416 -  1714.019:   43.2897%  (      715)
00:07:26.977   1714.019 -  1726.622:   44.4106%  (      726)
00:07:26.977   1726.622 -  1739.225:   45.4544%  (      676)
00:07:26.977   1739.225 -  1751.828:   46.5105%  (      684)
00:07:26.977   1751.828 -  1764.431:   47.5357%  (      664)
00:07:26.977   1764.431 -  1777.034:   48.5702%  (      670)
00:07:26.977   1777.034 -  1789.637:   49.5445%  (      631)
00:07:26.977   1789.637 -  1802.241:   50.4632%  (      595)
00:07:26.977   1802.241 -  1814.844:   51.3541%  (      577)
00:07:26.977   1814.844 -  1827.447:   52.1771%  (      533)
00:07:26.977   1827.447 -  1840.050:   52.9800%  (      520)
00:07:26.977   1840.050 -  1852.653:   53.7690%  (      511)
00:07:26.977   1852.653 -  1865.256:   54.5008%  (      474)
00:07:26.977   1865.256 -  1877.859:   55.1647%  (      430)
00:07:26.977   1877.859 -  1890.462:   55.8611%  (      451)
00:07:26.977   1890.462 -  1903.065:   56.5389%  (      439)
00:07:26.977   1903.065 -  1915.668:   57.1272%  (      381)
00:07:26.977   1915.668 -  1928.271:   57.7170%  (      382)
00:07:26.977   1928.271 -  1940.874:   58.3037%  (      380)
00:07:26.977   1940.874 -  1953.477:   58.9337%  (      408)
00:07:26.977   1953.477 -  1966.081:   59.5204%  (      380)
00:07:26.977   1966.081 -  1978.684:   60.0840%  (      365)
00:07:26.977   1978.684 -  1991.287:   60.6723%  (      381)
00:07:26.977   1991.287 -  2003.890:   61.2528%  (      376)
00:07:26.977   2003.890 -  2016.493:   61.8673%  (      398)
00:07:26.977   2016.493 -  2029.096:   62.4880%  (      402)
00:07:26.977   2029.096 -  2041.699:   63.0732%  (      379)
00:07:26.977   2041.699 -  2054.302:   63.6553%  (      377)
00:07:26.977   2054.302 -  2066.905:   64.2853%  (      408)
00:07:26.977   2066.905 -  2079.508:   64.8535%  (      368)
00:07:26.977   2079.508 -  2092.111:   65.3707%  (      335)
00:07:26.977   2092.111 -  2104.714:   65.8571%  (      315)
00:07:26.977   2104.714 -  2117.318:   66.3867%  (      343)
00:07:26.977   2117.318 -  2129.921:   66.9811%  (      385)
00:07:26.977   2129.921 -  2142.524:   67.5401%  (      362)
00:07:26.977   2142.524 -  2155.127:   68.0465%  (      328)
00:07:26.977   2155.127 -  2167.730:   68.5406%  (      320)
00:07:26.977   2167.730 -  2180.333:   69.0053%  (      301)
00:07:26.977   2180.333 -  2192.936:   69.4577%  (      293)
00:07:26.977   2192.936 -  2205.539:   69.9395%  (      312)
00:07:26.977   2205.539 -  2218.142:   70.4011%  (      299)
00:07:26.977   2218.142 -  2230.745:   70.8690%  (      303)
00:07:26.977   2230.745 -  2243.348:   71.3569%  (      316)
00:07:26.977   2243.348 -  2255.951:   71.8402%  (      313)
00:07:26.977   2255.951 -  2268.554:   72.3080%  (      303)
00:07:26.977   2268.554 -  2281.158:   72.8685%  (      363)
00:07:26.977   2281.158 -  2293.761:   73.4382%  (      369)
00:07:26.977   2293.761 -  2306.364:   74.0419%  (      391)
00:07:26.977   2306.364 -  2318.967:   74.5962%  (      359)
00:07:26.977   2318.967 -  2331.570:   75.1336%  (      348)
00:07:26.977   2331.570 -  2344.173:   75.6771%  (      352)
00:07:26.977   2344.173 -  2356.776:   76.2607%  (      378)
00:07:26.977   2356.776 -  2369.379:   76.8026%  (      351)
00:07:26.977   2369.379 -  2381.982:   77.3122%  (      330)
00:07:26.977   2381.982 -  2394.585:   77.8016%  (      317)
00:07:26.977   2394.585 -  2407.188:   78.2710%  (      304)
00:07:26.977   2407.188 -  2419.791:   78.7080%  (      283)
00:07:26.977   2419.791 -  2432.395:   79.0816%  (      242)
00:07:26.977   2432.395 -  2444.998:   79.4445%  (      235)
00:07:26.977   2444.998 -  2457.601:   79.8197%  (      243)
00:07:26.977   2457.601 -  2470.204:   80.2211%  (      260)
00:07:26.977   2470.204 -  2482.807:   80.6395%  (      271)
00:07:26.977   2482.807 -  2495.410:   81.1089%  (      304)
00:07:26.977   2495.410 -  2508.013:   81.5876%  (      310)
00:07:26.977   2508.013 -  2520.616:   82.0678%  (      311)
00:07:26.977   2520.616 -  2533.219:   82.5819%  (      333)
00:07:26.977   2533.219 -  2545.822:   83.0853%  (      326)
00:07:26.977   2545.822 -  2558.425:   83.5824%  (      322)
00:07:26.977   2558.425 -  2571.028:   84.1676%  (      379)
00:07:26.977   2571.028 -  2583.632:   84.7034%  (      347)
00:07:26.977   2583.632 -  2596.235:   85.1990%  (      321)
00:07:26.977   2596.235 -  2608.838:   85.6422%  (      287)
00:07:26.977   2608.838 -  2621.441:   86.0328%  (      253)
00:07:26.977   2621.441 -  2634.044:   86.4234%  (      253)
00:07:26.977   2634.044 -  2646.647:   86.7508%  (      212)
00:07:26.977   2646.647 -  2659.250:   87.0858%  (      217)
00:07:26.977   2659.250 -  2671.853:   87.3854%  (      194)
00:07:26.977   2671.853 -  2684.456:   87.6695%  (      184)
00:07:26.977   2684.456 -  2697.059:   87.9659%  (      192)
00:07:26.977   2697.059 -  2709.662:   88.2238%  (      167)
00:07:26.977   2709.662 -  2722.265:   88.5001%  (      179)
00:07:26.977   2722.265 -  2734.868:   88.7950%  (      191)
00:07:26.977   2734.868 -  2747.472:   89.0730%  (      180)
00:07:26.977   2747.472 -  2760.075:   89.3355%  (      170)
00:07:26.977   2760.075 -  2772.678:   89.5871%  (      163)
00:07:26.977   2772.678 -  2785.281:   89.8326%  (      159)
00:07:26.977   2785.281 -  2797.884:   90.0735%  (      156)
00:07:26.977   2797.884 -  2810.487:   90.3005%  (      147)
00:07:26.977   2810.487 -  2823.090:   90.5105%  (      136)
00:07:26.977   2823.090 -  2835.693:   90.7050%  (      126)
00:07:26.977   2835.693 -  2848.296:   90.8980%  (      125)
00:07:26.977   2848.296 -  2860.899:   91.0925%  (      126)
00:07:26.977   2860.899 -  2873.502:   91.3103%  (      141)
00:07:26.977   2873.502 -  2886.105:   91.5110%  (      130)
00:07:26.977   2886.105 -  2898.709:   91.7117%  (      130)
00:07:26.977   2898.709 -  2911.312:   91.8754%  (      106)
00:07:26.977   2911.312 -  2923.915:   92.0190%  (       93)
00:07:26.977   2923.915 -  2936.518:   92.1888%  (      110)
00:07:26.977   2936.518 -  2949.121:   92.3710%  (      118)
00:07:26.977   2949.121 -  2961.724:   92.5640%  (      125)
00:07:26.977   2961.724 -  2974.327:   92.7786%  (      139)
00:07:26.977   2974.327 -  2986.930:   92.9948%  (      140)
00:07:26.977   2986.930 -  2999.533:   93.2202%  (      146)
00:07:26.977   2999.533 -  3012.136:   93.4580%  (      154)
00:07:26.977   3012.136 -  3024.739:   93.7050%  (      160)
00:07:26.977   3024.739 -  3037.342:   93.9675%  (      170)
00:07:26.977   3037.342 -  3049.945:   94.2285%  (      169)
00:07:26.977   3049.945 -  3062.549:   94.4631%  (      152)
00:07:26.977   3062.549 -  3075.152:   94.6561%  (      125)
00:07:26.977   3075.152 -  3087.755:   94.8306%  (      113)
00:07:26.977   3087.755 -  3100.358:   94.9850%  (      100)
00:07:26.977   3100.358 -  3112.961:   95.1580%  (      112)
00:07:26.977   3112.961 -  3125.564:   95.3494%  (      124)
00:07:26.977   3125.564 -  3138.167:   95.5239%  (      113)
00:07:26.977   3138.167 -  3150.770:   95.6829%  (      103)
00:07:26.977   3150.770 -  3163.373:   95.8172%  (       87)
00:07:26.977   3163.373 -  3175.976:   95.9871%  (      110)
00:07:26.977   3175.976 -  3188.579:   96.2017%  (      139)
00:07:26.977   3188.579 -  3201.182:   96.3870%  (      120)
00:07:26.977   3201.182 -  3213.786:   96.5815%  (      126)
00:07:26.977   3213.786 -  3226.389:   96.7869%  (      133)
00:07:26.977   3226.389 -  3251.595:   97.1343%  (      225)
00:07:26.977   3251.595 -  3276.801:   97.4169%  (      183)
00:07:26.977   3276.801 -  3302.007:   97.6871%  (      175)
00:07:26.977   3302.007 -  3327.213:   97.8708%  (      119)
00:07:26.977   3327.213 -  3352.419:   98.0298%  (      103)
00:07:26.977   3352.419 -  3377.626:   98.1750%  (       94)
00:07:26.977   3377.626 -  3402.832:   98.3186%  (       93)
00:07:26.977   3402.832 -  3428.038:   98.3865%  (       44)
00:07:26.977   3428.038 -  3453.244:   98.4282%  (       27)
00:07:26.977   3453.244 -  3478.450:   98.4822%  (       35)
00:07:26.977   3478.450 -  3503.656:   98.5069%  (       16)
00:07:26.977   3503.656 -  3528.863:   98.5147%  (        5)
00:07:26.977   3528.863 -  3554.069:   98.5656%  (       33)
00:07:26.977   3554.069 -  3579.275:   98.6397%  (       48)
00:07:26.977   3579.275 -  3604.481:   98.7061%  (       43)
00:07:26.977   3604.481 -  3629.687:   98.7941%  (       57)
00:07:26.977   3629.687 -  3654.893:   98.8698%  (       49)
00:07:26.977   3654.893 -  3680.100:   98.9285%  (       38)
00:07:26.977   3680.100 -  3705.306:   98.9701%  (       27)
00:07:26.977   3705.306 -  3730.512:   99.0118%  (       27)
00:07:26.977   3730.512 -  3755.718:   99.0612%  (       32)
00:07:26.977   3755.718 -  3780.924:   99.1029%  (       27)
00:07:26.977   3780.924 -  3806.130:   99.1384%  (       23)
00:07:26.977   3806.130 -  3831.336:   99.1832%  (       29)
00:07:26.977   3831.336 -  3856.543:   99.2125%  (       19)
00:07:26.977   3856.543 -  3881.749:   99.2481%  (       23)
00:07:26.977   3881.749 -  3906.955:   99.2650%  (       11)
00:07:26.977   3906.955 -  3932.161:   99.2789%  (        9)
00:07:26.977   3932.161 -  3957.367:   99.3175%  (       25)
00:07:26.977   3957.367 -  3982.573:   99.3577%  (       26)
00:07:26.977   3982.573 -  4007.780:   99.3886%  (       20)
00:07:26.977   4007.780 -  4032.986:   99.4040%  (       10)
00:07:26.977   4032.986 -  4058.192:   99.4565%  (       34)
00:07:26.977   4058.192 -  4083.398:   99.4889%  (       21)
00:07:26.977   4083.398 -  4108.604:   99.5090%  (       13)
00:07:26.977   4108.604 -  4133.810:   99.5291%  (       13)
00:07:26.977   4133.810 -  4159.017:   99.5430%  (        9)
00:07:26.977   4159.017 -  4184.223:   99.5522%  (        6)
00:07:26.977   4209.429 -  4234.635:   99.5831%  (       20)
00:07:26.977   4234.635 -  4259.841:   99.6233%  (       26)
00:07:26.977   4259.841 -  4285.047:   99.6588%  (       23)
00:07:26.977   4285.047 -  4310.254:   99.6727%  (        9)
00:07:26.977   4310.254 -  4335.460:   99.6819%  (        6)
00:07:26.977   4486.697 -  4511.903:   99.6850%  (        2)
00:07:26.977   4511.903 -  4537.109:   99.7020%  (       11)
00:07:26.977   4537.109 -  4562.315:   99.7174%  (       10)
00:07:26.977   4562.315 -  4587.521:   99.7344%  (       11)
00:07:26.977   4587.521 -  4612.727:   99.7499%  (       10)
00:07:26.977   4612.727 -  4637.934:   99.7669%  (       11)
00:07:26.977   4637.934 -  4663.140:   99.7838%  (       11)
00:07:26.977   4663.140 -  4688.346:   99.7993%  (       10)
00:07:26.977   4688.346 -  4713.552:   99.8163%  (       11)
00:07:26.978   4713.552 -  4738.758:   99.8363%  (       13)
00:07:26.978   4738.758 -  4763.964:   99.8533%  (       11)
00:07:26.978   4763.964 -  4789.171:   99.8718%  (       12)
00:07:26.978   4789.171 -  4814.377:   99.8857%  (        9)
00:07:26.978   4940.408 -  4965.614:   99.8919%  (        4)
00:07:26.978   4965.614 -  4990.820:   99.8966%  (        3)
00:07:26.978   4990.820 -  5016.026:   99.9151%  (       12)
00:07:26.978   5116.851 -  5142.057:   99.9321%  (       11)
00:07:26.978   5192.469 -  5217.675:   99.9413%  (        6)
00:07:26.978   5469.737 -  5494.943:   99.9583%  (       11)
00:07:26.978   5494.943 -  5520.149:   99.9753%  (       11)
00:07:26.978   5520.149 -  5545.355:   99.9923%  (       11)
00:07:26.978   5545.355 -  5570.562:   99.9985%  (        4)
00:07:26.978   5797.417 -  5822.623:  100.0000%  (        1)
00:07:26.978  
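The summary percentile lines that `spdk_nvme_perf` prints (e.g. `50.00000% : 1903.065us`) are read directly off the cumulative-percentage column of the latency histogram: the reported value is the upper bound of the first bucket whose cumulative percentage reaches the target. A minimal sketch of that mapping, using three bucket rows copied from the second run's histogram below (this reconstructs the relationship for illustration; it is not SPDK's actual implementation):

```python
# Three (range_low_us, range_high_us, cumulative_pct) rows taken from
# the 12288-byte write run's histogram printed later in this log.
buckets = [
    (1877.859, 1890.462, 49.2529),
    (1890.462, 1903.065, 50.3550),
    (1903.065, 1915.668, 51.3195),
]

def percentile_bucket(buckets, target_pct):
    """Return the upper bound (us) of the first histogram bucket whose
    cumulative percentage reaches target_pct -- how a summary line such
    as '50.00000% : 1903.065us' maps back onto the histogram rows."""
    for lo, hi, cum_pct in buckets:
        if cum_pct >= target_pct:
            return hi
    return None

print(percentile_bucket(buckets, 50.0))  # -> 1903.065
```

The returned 1903.065 matches the `50.00000%` line in the second run's "Summary latency data" section, confirming the bucket-crossing rule.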
00:07:26.978   10:22:19 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:27.236  EAL: TSC is not safe to use in SMP mode
00:07:27.236  EAL: TSC is not invariant
00:07:27.236  [2024-12-09 10:22:19.375195] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:28.622  Initializing NVMe Controllers
00:07:28.622  Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:28.622  Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:28.622  Initialization complete. Launching workers.
00:07:28.622  ========================================================
00:07:28.622                                                                             Latency(us)
00:07:28.622  Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:28.622  PCIE (0000:00:10.0) NSID 1 from core  0:   63243.00     741.13    2024.76     203.62   11888.28
00:07:28.622  ========================================================
00:07:28.622  Total                                  :   63243.00     741.13    2024.76     203.62   11888.28
00:07:28.622  
00:07:28.622  Summary latency data for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:28.622  =================================================================================
00:07:28.622    1.00000% :  1235.102us
00:07:28.622   10.00000% :  1455.656us
00:07:28.622   25.00000% :  1625.797us
00:07:28.622   50.00000% :  1903.065us
00:07:28.622   75.00000% :  2318.967us
00:07:28.622   90.00000% :  2697.059us
00:07:28.622   95.00000% :  2974.327us
00:07:28.622   98.00000% :  3402.832us
00:07:28.622   99.00000% :  3932.161us
00:07:28.622   99.50000% :  4915.201us
00:07:28.622   99.90000% :  8519.682us
00:07:28.622   99.99000% : 11191.535us
00:07:28.622   99.99900% : 11897.308us
00:07:28.622   99.99990% : 11897.308us
00:07:28.622   99.99999% : 11897.308us
00:07:28.622  
00:07:28.622  Latency histogram for PCIE (0000:00:10.0) NSID 1                  from core 0:
00:07:28.622  ==============================================================================
00:07:28.622         Range in us     Cumulative    IO count
00:07:28.622    203.225 -   204.800:    0.0047%  (        3)
00:07:28.622    204.800 -   206.375:    0.0079%  (        2)
00:07:28.622    206.375 -   207.951:    0.0111%  (        2)
00:07:28.622    218.979 -   220.554:    0.0126%  (        1)
00:07:28.622    220.554 -   222.129:    0.0142%  (        1)
00:07:28.622    231.582 -   233.157:    0.0158%  (        1)
00:07:28.622    259.939 -   261.514:    0.0174%  (        1)
00:07:28.622    261.514 -   263.089:    0.0190%  (        1)
00:07:28.622    263.089 -   264.665:    0.0237%  (        3)
00:07:28.622    264.665 -   266.240:    0.0285%  (        3)
00:07:28.622    267.815 -   269.391:    0.0316%  (        2)
00:07:28.622    269.391 -   270.966:    0.0332%  (        1)
00:07:28.622    289.871 -   291.446:    0.0364%  (        2)
00:07:28.622    291.446 -   293.022:    0.0379%  (        1)
00:07:28.622    293.022 -   294.597:    0.0395%  (        1)
00:07:28.622    294.597 -   296.172:    0.0411%  (        1)
00:07:28.622    299.323 -   300.899:    0.0459%  (        3)
00:07:28.622    315.077 -   316.652:    0.0474%  (        1)
00:07:28.622    324.529 -   326.105:    0.0506%  (        2)
00:07:28.622    326.105 -   327.680:    0.0522%  (        1)
00:07:28.622    327.680 -   329.255:    0.0601%  (        5)
00:07:28.622    340.283 -   341.859:    0.0617%  (        1)
00:07:28.622    346.585 -   348.160:    0.0648%  (        2)
00:07:28.622    348.160 -   349.735:    0.0664%  (        1)
00:07:28.622    351.311 -   352.886:    0.0696%  (        2)
00:07:28.622    352.886 -   354.462:    0.0743%  (        3)
00:07:28.622    359.188 -   360.763:    0.0775%  (        2)
00:07:28.622    362.339 -   363.914:    0.0791%  (        1)
00:07:28.622    363.914 -   365.489:    0.0838%  (        3)
00:07:28.622    365.489 -   367.065:    0.0870%  (        2)
00:07:28.622    367.065 -   368.640:    0.0901%  (        2)
00:07:28.622    368.640 -   370.215:    0.0965%  (        4)
00:07:28.622    370.215 -   371.791:    0.1059%  (        6)
00:07:28.622    371.791 -   373.366:    0.1091%  (        2)
00:07:28.622    373.366 -   374.942:    0.1107%  (        1)
00:07:28.622    376.517 -   378.092:    0.1123%  (        1)
00:07:28.622    378.092 -   379.668:    0.1154%  (        2)
00:07:28.622    379.668 -   381.243:    0.1170%  (        1)
00:07:28.622    403.299 -   406.449:    0.1233%  (        4)
00:07:28.622    406.449 -   409.600:    0.1265%  (        2)
00:07:28.622    409.600 -   412.751:    0.1328%  (        4)
00:07:28.622    412.751 -   415.902:    0.1423%  (        6)
00:07:28.622    415.902 -   419.052:    0.1597%  (       11)
00:07:28.622    419.052 -   422.203:    0.1660%  (        4)
00:07:28.622    422.203 -   425.354:    0.1724%  (        4)
00:07:28.622    425.354 -   428.505:    0.1739%  (        1)
00:07:28.622    428.505 -   431.656:    0.1755%  (        1)
00:07:28.622    431.656 -   434.806:    0.1945%  (       12)
00:07:28.622    434.806 -   437.957:    0.2040%  (        6)
00:07:28.622    441.108 -   444.259:    0.2056%  (        1)
00:07:28.622    453.711 -   456.862:    0.2071%  (        1)
00:07:28.622    456.862 -   460.012:    0.2103%  (        2)
00:07:28.622    469.465 -   472.616:    0.2119%  (        1)
00:07:28.622    475.766 -   478.917:    0.2135%  (        1)
00:07:28.622    482.068 -   485.219:    0.2150%  (        1)
00:07:28.622    485.219 -   488.369:    0.2166%  (        1)
00:07:28.622    488.369 -   491.520:    0.2245%  (        5)
00:07:28.622    491.520 -   494.671:    0.2277%  (        2)
00:07:28.622    494.671 -   497.822:    0.2293%  (        1)
00:07:28.622    519.877 -   523.028:    0.2324%  (        2)
00:07:28.622    523.028 -   526.179:    0.2372%  (        3)
00:07:28.622    526.179 -   529.329:    0.2403%  (        2)
00:07:28.622    532.480 -   535.631:    0.2546%  (        9)
00:07:28.622    541.932 -   545.083:    0.2577%  (        2)
00:07:28.622    545.083 -   548.234:    0.2593%  (        1)
00:07:28.622    554.536 -   557.686:    0.2625%  (        2)
00:07:28.622    563.988 -   567.139:    0.2641%  (        1)
00:07:28.622    567.139 -   570.289:    0.2751%  (        7)
00:07:28.622    570.289 -   573.440:    0.2894%  (        9)
00:07:28.622    573.440 -   576.591:    0.2909%  (        1)
00:07:28.622    623.852 -   627.003:    0.2925%  (        1)
00:07:28.622    645.908 -   649.059:    0.2941%  (        1)
00:07:28.622    649.059 -   652.209:    0.3004%  (        4)
00:07:28.622    652.209 -   655.360:    0.3147%  (        9)
00:07:28.622    655.360 -   658.511:    0.3178%  (        2)
00:07:28.622    658.511 -   661.662:    0.3194%  (        1)
00:07:28.622    664.812 -   667.963:    0.3241%  (        3)
00:07:28.622    667.963 -   671.114:    0.3257%  (        1)
00:07:28.622    671.114 -   674.265:    0.3352%  (        6)
00:07:28.622    674.265 -   677.416:    0.3526%  (       11)
00:07:28.622    677.416 -   680.566:    0.3574%  (        3)
00:07:28.622    680.566 -   683.717:    0.3605%  (        2)
00:07:28.622    683.717 -   686.868:    0.3621%  (        1)
00:07:28.622    686.868 -   690.019:    0.3653%  (        2)
00:07:28.622    690.019 -   693.169:    0.3684%  (        2)
00:07:28.622    693.169 -   696.320:    0.3716%  (        2)
00:07:28.622    696.320 -   699.471:    0.3795%  (        5)
00:07:28.622    699.471 -   702.622:    0.3890%  (        6)
00:07:28.622    702.622 -   705.773:    0.3953%  (        4)
00:07:28.622    705.773 -   708.923:    0.4016%  (        4)
00:07:28.622    708.923 -   712.074:    0.4064%  (        3)
00:07:28.622    712.074 -   715.225:    0.4143%  (        5)
00:07:28.622    715.225 -   718.376:    0.4222%  (        5)
00:07:28.622    718.376 -   721.526:    0.4348%  (        8)
00:07:28.622    721.526 -   724.677:    0.4506%  (       10)
00:07:28.622    724.677 -   727.828:    0.4696%  (       12)
00:07:28.622    727.828 -   730.979:    0.4791%  (        6)
00:07:28.622    730.979 -   734.129:    0.4823%  (        2)
00:07:28.622    734.129 -   737.280:    0.4854%  (        2)
00:07:28.622    737.280 -   740.431:    0.4886%  (        2)
00:07:28.622    740.431 -   743.582:    0.4902%  (        1)
00:07:28.622    775.089 -   778.240:    0.4918%  (        1)
00:07:28.622    778.240 -   781.391:    0.4933%  (        1)
00:07:28.623    781.391 -   784.542:    0.4949%  (        1)
00:07:28.623    784.542 -   787.693:    0.4965%  (        1)
00:07:28.623    787.693 -   790.843:    0.4981%  (        1)
00:07:28.623    790.843 -   793.994:    0.5060%  (        5)
00:07:28.623    793.994 -   797.145:    0.5123%  (        4)
00:07:28.623    797.145 -   800.296:    0.5250%  (        8)
00:07:28.623    800.296 -   803.446:    0.5424%  (       11)
00:07:28.623    803.446 -   806.597:    0.5518%  (        6)
00:07:28.623    806.597 -   812.899:    0.5677%  (       10)
00:07:28.623    812.899 -   819.200:    0.5787%  (        7)
00:07:28.623    819.200 -   825.502:    0.5882%  (        6)
00:07:28.623    825.502 -   831.803:    0.5898%  (        1)
00:07:28.623    838.105 -   844.406:    0.5914%  (        1)
00:07:28.623    863.311 -   869.613:    0.5930%  (        1)
00:07:28.623    869.613 -   875.914:    0.5961%  (        2)
00:07:28.623    875.914 -   882.216:    0.6040%  (        5)
00:07:28.623    882.216 -   888.517:    0.6103%  (        4)
00:07:28.623    888.517 -   894.819:    0.6198%  (        6)
00:07:28.623    894.819 -   901.120:    0.6388%  (       12)
00:07:28.623    901.120 -   907.422:    0.6467%  (        5)
00:07:28.623    907.422 -   913.723:    0.6530%  (        4)
00:07:28.623    913.723 -   920.025:    0.6546%  (        1)
00:07:28.623    938.929 -   945.231:    0.6562%  (        1)
00:07:28.623    945.231 -   951.533:    0.6673%  (        7)
00:07:28.623    951.533 -   957.834:    0.6815%  (        9)
00:07:28.623    957.834 -   964.136:    0.6894%  (        5)
00:07:28.623    964.136 -   970.437:    0.6941%  (        3)
00:07:28.623    970.437 -   976.739:    0.7005%  (        4)
00:07:28.623    976.739 -   983.040:    0.7036%  (        2)
00:07:28.623   1058.659 -  1064.960:    0.7052%  (        1)
00:07:28.623   1064.960 -  1071.262:    0.7147%  (        6)
00:07:28.623   1140.579 -  1146.880:    0.7163%  (        1)
00:07:28.623   1146.880 -  1153.182:    0.7242%  (        5)
00:07:28.623   1153.182 -  1159.483:    0.7289%  (        3)
00:07:28.623   1159.483 -  1165.785:    0.7384%  (        6)
00:07:28.623   1165.785 -  1172.086:    0.7527%  (        9)
00:07:28.623   1172.086 -  1178.388:    0.7716%  (       12)
00:07:28.623   1178.388 -  1184.690:    0.7906%  (       12)
00:07:28.623   1184.690 -  1190.991:    0.8048%  (        9)
00:07:28.623   1190.991 -  1197.293:    0.8286%  (       15)
00:07:28.623   1197.293 -  1203.594:    0.8491%  (       13)
00:07:28.623   1203.594 -  1209.896:    0.8760%  (       17)
00:07:28.623   1209.896 -  1216.197:    0.8997%  (       15)
00:07:28.623   1216.197 -  1222.499:    0.9471%  (       30)
00:07:28.623   1222.499 -  1228.800:    0.9803%  (       21)
00:07:28.623   1228.800 -  1235.102:    1.0230%  (       27)
00:07:28.623   1235.102 -  1241.403:    1.0736%  (       32)
00:07:28.623   1241.403 -  1247.705:    1.1385%  (       41)
00:07:28.623   1247.705 -  1254.007:    1.2096%  (       45)
00:07:28.623   1254.007 -  1260.308:    1.3029%  (       59)
00:07:28.623   1260.308 -  1266.610:    1.3962%  (       59)
00:07:28.623   1266.610 -  1272.911:    1.5148%  (       75)
00:07:28.623   1272.911 -  1279.213:    1.6302%  (       73)
00:07:28.623   1279.213 -  1285.514:    1.7441%  (       72)
00:07:28.623   1285.514 -  1291.816:    1.8674%  (       78)
00:07:28.623   1291.816 -  1298.117:    1.9765%  (       69)
00:07:28.623   1298.117 -  1304.419:    2.0998%  (       78)
00:07:28.623   1304.419 -  1310.720:    2.2437%  (       91)
00:07:28.623   1310.720 -  1317.022:    2.3987%  (       98)
00:07:28.623   1317.022 -  1323.323:    2.5742%  (      111)
00:07:28.623   1323.323 -  1329.625:    2.7734%  (      126)
00:07:28.623   1329.625 -  1335.927:    2.9758%  (      128)
00:07:28.623   1335.927 -  1342.228:    3.1766%  (      127)
00:07:28.623   1342.228 -  1348.530:    3.3980%  (      140)
00:07:28.623   1348.530 -  1354.831:    3.6336%  (      149)
00:07:28.623   1354.831 -  1361.133:    3.8961%  (      166)
00:07:28.623   1361.133 -  1367.434:    4.1886%  (      185)
00:07:28.623   1367.434 -  1373.736:    4.5128%  (      205)
00:07:28.623   1373.736 -  1380.037:    4.8511%  (      214)
00:07:28.623   1380.037 -  1386.339:    5.2069%  (      225)
00:07:28.623   1386.339 -  1392.640:    5.5658%  (      227)
00:07:28.623   1392.640 -  1398.942:    5.9453%  (      240)
00:07:28.623   1398.942 -  1405.243:    6.3501%  (      256)
00:07:28.623   1405.243 -  1411.545:    6.8308%  (      304)
00:07:28.623   1411.545 -  1417.847:    7.3004%  (      297)
00:07:28.623   1417.847 -  1424.148:    7.7906%  (      310)
00:07:28.623   1424.148 -  1430.450:    8.2902%  (      316)
00:07:28.623   1430.450 -  1436.751:    8.7520%  (      292)
00:07:28.623   1436.751 -  1443.053:    9.2405%  (      309)
00:07:28.623   1443.053 -  1449.354:    9.7212%  (      304)
00:07:28.623   1449.354 -  1455.656:   10.2051%  (      306)
00:07:28.623   1455.656 -  1461.957:   10.7237%  (      328)
00:07:28.623   1461.957 -  1468.259:   11.2376%  (      325)
00:07:28.623   1468.259 -  1474.560:   11.7736%  (      339)
00:07:28.623   1474.560 -  1480.862:   12.2828%  (      322)
00:07:28.623   1480.862 -  1487.164:   12.8172%  (      338)
00:07:28.623   1487.164 -  1493.465:   13.3216%  (      319)
00:07:28.623   1493.465 -  1499.767:   13.8181%  (      314)
00:07:28.623   1499.767 -  1506.068:   14.3842%  (      358)
00:07:28.623   1506.068 -  1512.370:   14.9376%  (      350)
00:07:28.623   1512.370 -  1518.671:   15.4831%  (      345)
00:07:28.623   1518.671 -  1524.973:   16.0318%  (      347)
00:07:28.623   1524.973 -  1531.274:   16.5900%  (      353)
00:07:28.623   1531.274 -  1537.576:   17.0738%  (      306)
00:07:28.623   1537.576 -  1543.877:   17.6099%  (      339)
00:07:28.623   1543.877 -  1550.179:   18.1475%  (      340)
00:07:28.623   1550.179 -  1556.480:   18.7214%  (      363)
00:07:28.623   1556.480 -  1562.782:   19.2717%  (      348)
00:07:28.623   1562.782 -  1569.084:   19.8884%  (      390)
00:07:28.623   1569.084 -  1575.385:   20.5208%  (      400)
00:07:28.623   1575.385 -  1581.687:   21.1154%  (      376)
00:07:28.623   1581.687 -  1587.988:   21.7289%  (      388)
00:07:28.623   1587.988 -  1594.290:   22.3218%  (      375)
00:07:28.623   1594.290 -  1600.591:   22.9085%  (      371)
00:07:28.623   1600.591 -  1606.893:   23.5062%  (      378)
00:07:28.623   1606.893 -  1613.194:   24.1165%  (      386)
00:07:28.623   1613.194 -  1625.797:   25.3799%  (      799)
00:07:28.623   1625.797 -  1638.400:   26.6954%  (      832)
00:07:28.623   1638.400 -  1651.004:   28.0078%  (      830)
00:07:28.623   1651.004 -  1663.607:   29.3424%  (      844)
00:07:28.623   1663.607 -  1676.210:   30.7417%  (      885)
00:07:28.623   1676.210 -  1688.813:   32.0273%  (      813)
00:07:28.623   1688.813 -  1701.416:   33.3318%  (      825)
00:07:28.623   1701.416 -  1714.019:   34.5920%  (      797)
00:07:28.623   1714.019 -  1726.622:   35.8743%  (      811)
00:07:28.623   1726.622 -  1739.225:   37.1820%  (      827)
00:07:28.623   1739.225 -  1751.828:   38.4833%  (      823)
00:07:28.623   1751.828 -  1764.431:   39.7056%  (      773)
00:07:28.623   1764.431 -  1777.034:   40.8346%  (      714)
00:07:28.623   1777.034 -  1789.637:   41.9114%  (      681)
00:07:28.623   1789.637 -  1802.241:   42.9249%  (      641)
00:07:28.623   1802.241 -  1814.844:   43.8879%  (      609)
00:07:28.623   1814.844 -  1827.447:   44.8477%  (      607)
00:07:28.623   1827.447 -  1840.050:   45.7284%  (      557)
00:07:28.623   1840.050 -  1852.653:   46.5048%  (      491)
00:07:28.623   1852.653 -  1865.256:   47.3380%  (      527)
00:07:28.623   1865.256 -  1877.859:   48.2710%  (      590)
00:07:28.623   1877.859 -  1890.462:   49.2529%  (      621)
00:07:28.623   1890.462 -  1903.065:   50.3550%  (      697)
00:07:28.623   1903.065 -  1915.668:   51.3195%  (      610)
00:07:28.623   1915.668 -  1928.271:   52.1939%  (      553)
00:07:28.623   1928.271 -  1940.874:   53.1300%  (      592)
00:07:28.623   1940.874 -  1953.477:   53.9870%  (      542)
00:07:28.623   1953.477 -  1966.081:   54.8061%  (      518)
00:07:28.623   1966.081 -  1978.684:   55.6014%  (      503)
00:07:28.623   1978.684 -  1991.287:   56.4600%  (      543)
00:07:28.623   1991.287 -  2003.890:   57.2063%  (      472)
00:07:28.623   2003.890 -  2016.493:   57.9147%  (      448)
00:07:28.623   2016.493 -  2029.096:   58.6073%  (      438)
00:07:28.623   2029.096 -  2041.699:   59.2461%  (      404)
00:07:28.623   2041.699 -  2054.302:   59.8612%  (      389)
00:07:28.623   2054.302 -  2066.905:   60.5126%  (      412)
00:07:28.623   2066.905 -  2079.508:   61.1182%  (      383)
00:07:28.623   2079.508 -  2092.111:   61.6875%  (      360)
00:07:28.623   2092.111 -  2104.714:   62.3579%  (      424)
00:07:28.623   2104.714 -  2117.318:   63.0694%  (      450)
00:07:28.623   2117.318 -  2129.921:   63.7383%  (      423)
00:07:28.623   2129.921 -  2142.524:   64.5479%  (      512)
00:07:28.623   2142.524 -  2155.127:   65.3306%  (      495)
00:07:28.623   2155.127 -  2167.730:   66.1006%  (      487)
00:07:28.623   2167.730 -  2180.333:   66.8532%  (      476)
00:07:28.623   2180.333 -  2192.936:   67.5917%  (      467)
00:07:28.623   2192.936 -  2205.539:   68.3491%  (      479)
00:07:28.623   2205.539 -  2218.142:   69.1191%  (      487)
00:07:28.623   2218.142 -  2230.745:   69.8433%  (      458)
00:07:28.623   2230.745 -  2243.348:   70.6260%  (      495)
00:07:28.623   2243.348 -  2255.951:   71.4198%  (      502)
00:07:28.623   2255.951 -  2268.554:   72.2325%  (      514)
00:07:28.623   2268.554 -  2281.158:   73.0611%  (      524)
00:07:28.623   2281.158 -  2293.761:   73.8880%  (      523)
00:07:28.623   2293.761 -  2306.364:   74.5885%  (      443)
00:07:28.623   2306.364 -  2318.967:   75.2178%  (      398)
00:07:28.623   2318.967 -  2331.570:   75.8788%  (      418)
00:07:28.623   2331.570 -  2344.173:   76.5318%  (      413)
00:07:28.623   2344.173 -  2356.776:   77.1469%  (      389)
00:07:28.623   2356.776 -  2369.379:   77.7272%  (      367)
00:07:28.623   2369.379 -  2381.982:   78.3217%  (      376)
00:07:28.623   2381.982 -  2394.585:   78.8293%  (      321)
00:07:28.623   2394.585 -  2407.188:   79.3479%  (      328)
00:07:28.623   2407.188 -  2419.791:   79.8824%  (      338)
00:07:28.623   2419.791 -  2432.395:   80.4753%  (      375)
00:07:28.623   2432.395 -  2444.998:   81.0524%  (      365)
00:07:28.623   2444.998 -  2457.601:   81.7134%  (      418)
00:07:28.623   2457.601 -  2470.204:   82.3569%  (      407)
00:07:28.623   2470.204 -  2482.807:   82.9894%  (      400)
00:07:28.623   2482.807 -  2495.410:   83.6330%  (      407)
00:07:28.623   2495.410 -  2508.013:   84.2844%  (      412)
00:07:28.623   2508.013 -  2520.616:   84.8489%  (      357)
00:07:28.623   2520.616 -  2533.219:   85.3470%  (      315)
00:07:28.623   2533.219 -  2545.822:   85.7771%  (      272)
00:07:28.623   2545.822 -  2558.425:   86.1787%  (      254)
00:07:28.623   2558.425 -  2571.028:   86.5313%  (      223)
00:07:28.623   2571.028 -  2583.632:   86.8697%  (      214)
00:07:28.623   2583.632 -  2596.235:   87.2176%  (      220)
00:07:28.623   2596.235 -  2608.838:   87.5923%  (      237)
00:07:28.623   2608.838 -  2621.441:   87.9892%  (      251)
00:07:28.623   2621.441 -  2634.044:   88.3829%  (      249)
00:07:28.623   2634.044 -  2646.647:   88.7719%  (      246)
00:07:28.623   2646.647 -  2659.250:   89.1703%  (      252)
00:07:28.623   2659.250 -  2671.853:   89.5483%  (      239)
00:07:28.623   2671.853 -  2684.456:   89.9198%  (      235)
00:07:28.623   2684.456 -  2697.059:   90.2756%  (      225)
00:07:28.623   2697.059 -  2709.662:   90.6029%  (      207)
00:07:28.623   2709.662 -  2722.265:   90.9223%  (      202)
00:07:28.623   2722.265 -  2734.868:   91.2449%  (      204)
00:07:28.624   2734.868 -  2747.472:   91.5184%  (      173)
00:07:28.624   2747.472 -  2760.075:   91.8252%  (      194)
00:07:28.624   2760.075 -  2772.678:   92.1145%  (      183)
00:07:28.624   2772.678 -  2785.281:   92.3391%  (      142)
00:07:28.624   2785.281 -  2797.884:   92.5668%  (      144)
00:07:28.624   2797.884 -  2810.487:   92.7960%  (      145)
00:07:28.624   2810.487 -  2823.090:   93.0048%  (      132)
00:07:28.624   2823.090 -  2835.693:   93.2040%  (      126)
00:07:28.624   2835.693 -  2848.296:   93.3985%  (      123)
00:07:28.624   2848.296 -  2860.899:   93.5866%  (      119)
00:07:28.624   2860.899 -  2873.502:   93.7590%  (      109)
00:07:28.624   2873.502 -  2886.105:   93.9124%  (       97)
00:07:28.624   2886.105 -  2898.709:   94.1132%  (      127)
00:07:28.624   2898.709 -  2911.312:   94.2729%  (      101)
00:07:28.624   2911.312 -  2923.915:   94.4610%  (      119)
00:07:28.624   2923.915 -  2936.518:   94.6508%  (      120)
00:07:28.624   2936.518 -  2949.121:   94.8200%  (      107)
00:07:28.624   2949.121 -  2961.724:   94.9670%  (       93)
00:07:28.624   2961.724 -  2974.327:   95.1252%  (      100)
00:07:28.624   2974.327 -  2986.930:   95.3291%  (      129)
00:07:28.624   2986.930 -  2999.533:   95.5236%  (      123)
00:07:28.624   2999.533 -  3012.136:   95.7418%  (      138)
00:07:28.624   3012.136 -  3024.739:   95.9395%  (      125)
00:07:28.624   3024.739 -  3037.342:   96.0881%  (       94)
00:07:28.624   3037.342 -  3049.945:   96.2020%  (       72)
00:07:28.624   3049.945 -  3062.549:   96.2889%  (       55)
00:07:28.624   3062.549 -  3075.152:   96.3648%  (       48)
00:07:28.624   3075.152 -  3087.755:   96.4423%  (       49)
00:07:28.624   3087.755 -  3100.358:   96.4897%  (       30)
00:07:28.624   3100.358 -  3112.961:   96.5451%  (       35)
00:07:28.624   3112.961 -  3125.564:   96.6067%  (       39)
00:07:28.624   3125.564 -  3138.167:   96.6621%  (       35)
00:07:28.624   3138.167 -  3150.770:   96.7427%  (       51)
00:07:28.624   3150.770 -  3163.373:   96.8313%  (       56)
00:07:28.624   3163.373 -  3175.976:   96.9325%  (       64)
00:07:28.624   3175.976 -  3188.579:   97.0131%  (       51)
00:07:28.624   3188.579 -  3201.182:   97.0858%  (       46)
00:07:28.624   3201.182 -  3213.786:   97.1776%  (       58)
00:07:28.624   3213.786 -  3226.389:   97.2946%  (       74)
00:07:28.624   3226.389 -  3251.595:   97.4732%  (      113)
00:07:28.624   3251.595 -  3276.801:   97.6076%  (       85)
00:07:28.624   3276.801 -  3302.007:   97.6914%  (       53)
00:07:28.624   3302.007 -  3327.213:   97.7800%  (       56)
00:07:28.624   3327.213 -  3352.419:   97.8432%  (       40)
00:07:28.624   3352.419 -  3377.626:   97.9413%  (       62)
00:07:28.624   3377.626 -  3402.832:   98.0346%  (       59)
00:07:28.624   3402.832 -  3428.038:   98.1294%  (       60)
00:07:28.624   3428.038 -  3453.244:   98.2275%  (       62)
00:07:28.624   3453.244 -  3478.450:   98.3208%  (       59)
00:07:28.624   3478.450 -  3503.656:   98.4014%  (       51)
00:07:28.624   3503.656 -  3528.863:   98.4868%  (       54)
00:07:28.624   3528.863 -  3554.069:   98.5674%  (       51)
00:07:28.624   3554.069 -  3579.275:   98.6370%  (       44)
00:07:28.624   3579.275 -  3604.481:   98.6797%  (       27)
00:07:28.624   3604.481 -  3629.687:   98.7034%  (       15)
00:07:28.624   3629.687 -  3654.893:   98.7319%  (       18)
00:07:28.624   3654.893 -  3680.100:   98.7572%  (       16)
00:07:28.624   3680.100 -  3705.306:   98.7967%  (       25)
00:07:28.624   3705.306 -  3730.512:   98.8267%  (       19)
00:07:28.624   3730.512 -  3755.718:   98.8520%  (       16)
00:07:28.624   3755.718 -  3780.924:   98.8600%  (        5)
00:07:28.624   3780.924 -  3806.130:   98.8726%  (        8)
00:07:28.624   3806.130 -  3831.336:   98.8789%  (        4)
00:07:28.624   3831.336 -  3856.543:   98.8805%  (        1)
00:07:28.624   3856.543 -  3881.749:   98.9074%  (       17)
00:07:28.624   3881.749 -  3906.955:   98.9627%  (       35)
00:07:28.624   3906.955 -  3932.161:   99.0007%  (       24)
00:07:28.624   3932.161 -  3957.367:   99.0813%  (       51)
00:07:28.624   3957.367 -  3982.573:   99.1762%  (       60)
00:07:28.624   3982.573 -  4007.780:   99.2252%  (       31)
00:07:28.624   4007.780 -  4032.986:   99.2458%  (       13)
00:07:28.624   4032.986 -  4058.192:   99.2695%  (       15)
00:07:28.624   4058.192 -  4083.398:   99.2742%  (        3)
00:07:28.624   4083.398 -  4108.604:   99.2837%  (        6)
00:07:28.624   4108.604 -  4133.810:   99.2916%  (        5)
00:07:28.624   4133.810 -  4159.017:   99.2995%  (        5)
00:07:28.624   4159.017 -  4184.223:   99.3122%  (        8)
00:07:28.624   4184.223 -  4209.429:   99.3153%  (        2)
00:07:28.624   4209.429 -  4234.635:   99.3312%  (       10)
00:07:28.624   4259.841 -  4285.047:   99.3375%  (        4)
00:07:28.624   4310.254 -  4335.460:   99.3406%  (        2)
00:07:28.624   4335.460 -  4360.666:   99.3565%  (       10)
00:07:28.624   4360.666 -  4385.872:   99.3659%  (        6)
00:07:28.624   4385.872 -  4411.078:   99.3849%  (       12)
00:07:28.624   4411.078 -  4436.284:   99.3928%  (        5)
00:07:28.624   4436.284 -  4461.491:   99.4165%  (       15)
00:07:28.624   4461.491 -  4486.697:   99.4244%  (        5)
00:07:28.624   4486.697 -  4511.903:   99.4260%  (        1)
00:07:28.624   4511.903 -  4537.109:   99.4355%  (        6)
00:07:28.624   4537.109 -  4562.315:   99.4371%  (        1)
00:07:28.624   4562.315 -  4587.521:   99.4418%  (        3)
00:07:28.624   4612.727 -  4637.934:   99.4497%  (        5)
00:07:28.624   4688.346 -  4713.552:   99.4545%  (        3)
00:07:28.624   4713.552 -  4738.758:   99.4640%  (        6)
00:07:28.624   4738.758 -  4763.964:   99.4719%  (        5)
00:07:28.624   4763.964 -  4789.171:   99.4735%  (        1)
00:07:28.624   4789.171 -  4814.377:   99.4750%  (        1)
00:07:28.624   4864.789 -  4889.995:   99.4814%  (        4)
00:07:28.624   4889.995 -  4915.201:   99.5067%  (       16)
00:07:28.624   4940.408 -  4965.614:   99.5114%  (        3)
00:07:28.624   4965.614 -  4990.820:   99.5209%  (        6)
00:07:28.624   4990.820 -  5016.026:   99.5241%  (        2)
00:07:28.624   5016.026 -  5041.232:   99.5335%  (        6)
00:07:28.624   5041.232 -  5066.438:   99.5399%  (        4)
00:07:28.624   5066.438 -  5091.645:   99.5462%  (        4)
00:07:28.624   5091.645 -  5116.851:   99.5525%  (        4)
00:07:28.624   5116.851 -  5142.057:   99.5826%  (       19)
00:07:29.197   5142.057 -  5167.263:   99.5905%  (        5)
00:07:29.197   5167.263 -  5192.469:   99.5920%  (        1)
00:07:29.197   5217.675 -  5242.881:   99.5952%  (        2)
00:07:29.197   5242.881 -  5268.088:   99.6094%  (        9)
00:07:29.197   5268.088 -  5293.294:   99.6284%  (       12)
00:07:29.197   5293.294 -  5318.500:   99.6300%  (        1)
00:07:29.197   5368.912 -  5394.118:   99.6332%  (        2)
00:07:29.197   5394.118 -  5419.325:   99.6347%  (        1)
00:07:29.197   5419.325 -  5444.531:   99.6426%  (        5)
00:07:29.197   5494.943 -  5520.149:   99.6442%  (        1)
00:07:29.197   5520.149 -  5545.355:   99.6474%  (        2)
00:07:29.197   5545.355 -  5570.562:   99.6506%  (        2)
00:07:29.197   5570.562 -  5595.768:   99.6664%  (       10)
00:07:29.197   5620.974 -  5646.180:   99.6679%  (        1)
00:07:29.197   5646.180 -  5671.386:   99.6790%  (        7)
00:07:29.197   5696.592 -  5721.799:   99.6838%  (        3)
00:07:29.197   5721.799 -  5747.005:   99.6853%  (        1)
00:07:29.197   5747.005 -  5772.211:   99.6869%  (        1)
00:07:29.197   5822.623 -  5847.829:   99.6885%  (        1)
00:07:29.197   5847.829 -  5873.036:   99.6901%  (        1)
00:07:29.197   5898.242 -  5923.448:   99.6917%  (        1)
00:07:29.197   6024.272 -  6049.479:   99.7012%  (        6)
00:07:29.197   6125.097 -  6150.303:   99.7043%  (        2)
00:07:29.197   6150.303 -  6175.509:   99.7091%  (        3)
00:07:29.197   6251.128 -  6276.334:   99.7122%  (        2)
00:07:29.197   6301.540 -  6326.746:   99.7249%  (        8)
00:07:29.197   6402.365 -  6427.571:   99.7280%  (        2)
00:07:29.197   6427.571 -  6452.777:   99.7296%  (        1)
00:07:29.197   6452.777 -  6503.190:   99.7344%  (        3)
00:07:29.197   6503.190 -  6553.602:   99.7423%  (        5)
00:07:29.197   6553.602 -  6604.014:   99.7518%  (        6)
00:07:29.197   6604.014 -  6654.427:   99.7612%  (        6)
00:07:29.197   6654.427 -  6704.839:   99.7628%  (        1)
00:07:29.197   6704.839 -  6755.251:   99.7660%  (        2)
00:07:29.197   7007.313 -  7057.725:   99.7818%  (       10)
00:07:29.197   7057.725 -  7108.137:   99.7834%  (        1)
00:07:29.197   7108.137 -  7158.550:   99.7850%  (        1)
00:07:29.197   7158.550 -  7208.962:   99.7913%  (        4)
00:07:29.197   7208.962 -  7259.374:   99.7976%  (        4)
00:07:29.197   7309.787 -  7360.199:   99.7992%  (        1)
00:07:29.197   7410.611 -  7461.024:   99.8023%  (        2)
00:07:29.197   7461.024 -  7511.436:   99.8071%  (        3)
00:07:29.197   7511.436 -  7561.848:   99.8103%  (        2)
00:07:29.197   7561.848 -  7612.261:   99.8166%  (        4)
00:07:29.197   7612.261 -  7662.673:   99.8261%  (        6)
00:07:29.197   7713.085 -  7763.498:   99.8276%  (        1)
00:07:29.197   7763.498 -  7813.910:   99.8340%  (        4)
00:07:29.197   7813.910 -  7864.322:   99.8387%  (        3)
00:07:29.197   7965.147 -  8015.559:   99.8514%  (        8)
00:07:29.197   8116.384 -  8166.796:   99.8529%  (        1)
00:07:29.197   8166.796 -  8217.208:   99.8577%  (        3)
00:07:29.197   8368.445 -  8418.858:   99.8656%  (        5)
00:07:29.197   8418.858 -  8469.270:   99.8846%  (       12)
00:07:29.197   8469.270 -  8519.682:   99.9035%  (       12)
00:07:29.197   8771.744 -  8822.156:   99.9067%  (        2)
00:07:29.197   8822.156 -  8872.569:   99.9083%  (        1)
00:07:29.197   8872.569 -  8922.981:   99.9099%  (        1)
00:07:29.197   9023.806 -  9074.218:   99.9115%  (        1)
00:07:29.197   9175.043 -  9225.455:   99.9178%  (        4)
00:07:29.197   9275.867 -  9326.280:   99.9225%  (        3)
00:07:29.197   9326.280 -  9376.692:   99.9257%  (        2)
00:07:29.197   9679.166 -  9729.578:   99.9288%  (        2)
00:07:29.197   9779.990 -  9830.403:   99.9320%  (        2)
00:07:29.197   9931.227 -  9981.640:   99.9336%  (        1)
00:07:29.197  10233.701 - 10284.114:   99.9368%  (        2)
00:07:29.197  10435.351 - 10485.763:   99.9415%  (        3)
00:07:29.197  10485.763 - 10536.175:   99.9478%  (        4)
00:07:29.197  10586.588 - 10637.000:   99.9494%  (        1)
00:07:29.197  10637.000 - 10687.412:   99.9526%  (        2)
00:07:29.197  10687.412 - 10737.825:   99.9541%  (        1)
00:07:29.197  10838.649 - 10889.062:   99.9573%  (        2)
00:07:29.197  10889.062 - 10939.474:   99.9763%  (       12)
00:07:29.197  10939.474 - 10989.886:   99.9842%  (        5)
00:07:29.197  11090.711 - 11141.123:   99.9889%  (        3)
00:07:29.197  11141.123 - 11191.535:   99.9968%  (        5)
00:07:29.197  11342.772 - 11393.185:   99.9984%  (        1)
00:07:29.197  11846.896 - 11897.308:  100.0000%  (        1)
00:07:29.197  
00:07:29.197   10:22:21 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:29.197  
00:07:29.197  real	0m3.481s
00:07:29.197  user	0m2.824s
00:07:29.197  sys	0m0.650s
00:07:29.197   10:22:21 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.197   10:22:21 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:29.197  ************************************
00:07:29.197  END TEST nvme_perf
00:07:29.197  ************************************
00:07:29.197   10:22:21 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:29.197   10:22:21 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:29.197   10:22:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.197   10:22:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:29.197  ************************************
00:07:29.197  START TEST nvme_hello_world
00:07:29.197  ************************************
00:07:29.197   10:22:21 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:29.459  EAL: TSC is not safe to use in SMP mode
00:07:29.459  EAL: TSC is not invariant
00:07:29.459  [2024-12-09 10:22:21.591925] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:29.459  Initializing NVMe Controllers
00:07:29.459  Attaching to 0000:00:10.0
00:07:29.459  Attached to 0000:00:10.0
00:07:29.459    Namespace ID: 1 size: 5GB
00:07:29.459  Initialization complete.
00:07:29.459  INFO: using host memory buffer for IO
00:07:29.459  Hello world!
00:07:29.720  
00:07:29.720  real	0m0.352s
00:07:29.720  user	0m0.012s
00:07:29.720  sys	0m0.339s
00:07:29.720   10:22:21 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.720  ************************************
00:07:29.720  END TEST nvme_hello_world
00:07:29.720  ************************************
00:07:29.720   10:22:21 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:29.720   10:22:21 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:29.720   10:22:21 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.720   10:22:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.720   10:22:21 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:29.721  ************************************
00:07:29.721  START TEST nvme_sgl
00:07:29.721  ************************************
00:07:29.721   10:22:21 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:29.982  EAL: TSC is not safe to use in SMP mode
00:07:29.982  EAL: TSC is not invariant
00:07:29.982  [2024-12-09 10:22:22.017105] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:29.982  0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:29.982  0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:29.982  0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:29.982  0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:29.982  0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:29.982  0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:29.982  NVMe Readv/Writev Request test
00:07:29.982  Attaching to 0000:00:10.0
00:07:29.982  Attached to 0000:00:10.0
00:07:29.982  0000:00:10.0: build_io_request_2 test passed
00:07:29.982  0000:00:10.0: build_io_request_4 test passed
00:07:29.982  0000:00:10.0: build_io_request_5 test passed
00:07:29.982  0000:00:10.0: build_io_request_6 test passed
00:07:29.982  0000:00:10.0: build_io_request_7 test passed
00:07:29.982  0000:00:10.0: build_io_request_10 test passed
00:07:29.982  Cleaning up...
00:07:29.982  
00:07:29.982  real	0m0.357s
00:07:29.982  user	0m0.027s
00:07:29.982  sys	0m0.329s
00:07:29.982   10:22:22 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.982  ************************************
00:07:29.982  END TEST nvme_sgl
00:07:29.982  ************************************
00:07:29.982   10:22:22 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:29.982   10:22:22 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:29.982   10:22:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:29.982   10:22:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:29.982   10:22:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:29.982  ************************************
00:07:29.982  START TEST nvme_e2edp
00:07:29.982  ************************************
00:07:29.982   10:22:22 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:30.553  EAL: TSC is not safe to use in SMP mode
00:07:30.553  EAL: TSC is not invariant
00:07:30.553  [2024-12-09 10:22:22.447909] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:30.553  NVMe Write/Read with End-to-End data protection test
00:07:30.553  Attaching to 0000:00:10.0
00:07:30.553  Attached to 0000:00:10.0
00:07:30.553  Cleaning up...
00:07:30.553  
00:07:30.553  real	0m0.349s
00:07:30.553  user	0m0.015s
00:07:30.553  sys	0m0.333s
00:07:30.554   10:22:22 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.554  ************************************
00:07:30.554  END TEST nvme_e2edp
00:07:30.554  ************************************
00:07:30.554   10:22:22 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:30.554   10:22:22 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:30.554   10:22:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.554   10:22:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.554   10:22:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:30.554  ************************************
00:07:30.554  START TEST nvme_reserve
00:07:30.554  ************************************
00:07:30.554   10:22:22 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:30.815  EAL: TSC is not safe to use in SMP mode
00:07:30.815  EAL: TSC is not invariant
00:07:30.815  [2024-12-09 10:22:22.867257] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:30.815  =====================================================
00:07:30.815  NVMe Controller at PCI bus 0, device 16, function 0
00:07:30.815  =====================================================
00:07:30.815  Reservations:                Not Supported
00:07:30.815  Reservation test passed
00:07:30.815  
00:07:30.815  real	0m0.350s
00:07:30.815  user	0m0.024s
00:07:30.815  sys	0m0.328s
00:07:30.815   10:22:22 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:30.815   10:22:22 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:30.815  ************************************
00:07:30.815  END TEST nvme_reserve
00:07:30.815  ************************************
00:07:30.815   10:22:22 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:30.815   10:22:22 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:30.815   10:22:22 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.815   10:22:22 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:30.815  ************************************
00:07:30.815  START TEST nvme_err_injection
00:07:30.815  ************************************
00:07:30.815   10:22:22 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:31.387  EAL: TSC is not safe to use in SMP mode
00:07:31.387  EAL: TSC is not invariant
00:07:31.387  [2024-12-09 10:22:23.276501] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:31.387  NVMe Error Injection test
00:07:31.387  Attaching to 0000:00:10.0
00:07:31.387  Attached to 0000:00:10.0
00:07:31.387  0000:00:10.0: get features failed as expected
00:07:31.387  0000:00:10.0: get features successfully as expected
00:07:31.387  0000:00:10.0: read failed as expected
00:07:31.387  0000:00:10.0: read successfully as expected
00:07:31.387  Cleaning up...
00:07:31.387  
00:07:31.387  real	0m0.354s
00:07:31.387  user	0m0.013s
00:07:31.387  sys	0m0.339s
00:07:31.387   10:22:23 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:31.387  ************************************
00:07:31.387  END TEST nvme_err_injection
00:07:31.387  ************************************
00:07:31.387   10:22:23 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:31.387   10:22:23 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:31.387   10:22:23 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:31.387   10:22:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:31.387   10:22:23 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:31.387  ************************************
00:07:31.387  START TEST nvme_overhead
00:07:31.387  ************************************
00:07:31.387   10:22:23 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:31.647  EAL: TSC is not safe to use in SMP mode
00:07:31.648  EAL: TSC is not invariant
00:07:31.648  [2024-12-09 10:22:23.699683] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:32.591  Initializing NVMe Controllers
00:07:32.591  Attaching to 0000:00:10.0
00:07:32.591  Attached to 0000:00:10.0
00:07:32.591  Initialization complete. Launching workers.
00:07:32.591  submit (in ns)   avg, min, max =   8691.9,   6072.3, 249445.5
00:07:32.591  complete (in ns) avg, min, max =   6804.2,   5353.8, 394191.7
00:07:32.591  
00:07:32.591  Submit histogram
00:07:32.591  ================
00:07:32.591         Range in us     Cumulative     Count
00:07:32.591      6.055 -     6.080:    0.0308%  (        1)
00:07:32.591      6.892 -     6.942:    0.0616%  (        1)
00:07:32.591      7.237 -     7.286:    0.2155%  (        5)
00:07:32.591      7.286 -     7.335:    0.4001%  (        6)
00:07:32.591      7.335 -     7.385:    1.0465%  (       21)
00:07:32.591      7.385 -     7.434:    1.8775%  (       27)
00:07:32.591      7.434 -     7.483:    4.0936%  (       72)
00:07:32.591      7.483 -     7.532:    8.5873%  (      146)
00:07:32.591      7.532 -     7.582:   16.1896%  (      247)
00:07:32.591      7.582 -     7.631:   24.2844%  (      263)
00:07:32.591      7.631 -     7.680:   29.2705%  (      162)
00:07:32.591      7.680 -     7.729:   33.1487%  (      126)
00:07:32.591      7.729 -     7.778:   36.0111%  (       93)
00:07:32.591      7.778 -     7.828:   37.7347%  (       56)
00:07:32.591      7.828 -     7.877:   39.6737%  (       63)
00:07:32.591      7.877 -     7.926:   41.5512%  (       61)
00:07:32.591      7.926 -     7.975:   43.0286%  (       48)
00:07:32.591      7.975 -     8.025:   45.0292%  (       65)
00:07:32.591      8.025 -     8.074:   47.0914%  (       67)
00:07:32.591      8.074 -     8.123:   49.0305%  (       63)
00:07:32.591      8.123 -     8.172:   52.4469%  (      111)
00:07:32.591      8.172 -     8.222:   56.5097%  (      132)
00:07:32.591      8.222 -     8.271:   61.5266%  (      163)
00:07:32.591      8.271 -     8.320:   65.8972%  (      142)
00:07:32.591      8.320 -     8.369:   69.8369%  (      128)
00:07:32.591      8.369 -     8.418:   72.6685%  (       92)
00:07:32.591      8.418 -     8.468:   74.7922%  (       69)
00:07:32.591      8.468 -     8.517:   76.2081%  (       46)
00:07:32.591      8.517 -     8.566:   77.1006%  (       29)
00:07:32.591      8.566 -     8.615:   77.9009%  (       26)
00:07:32.591      8.615 -     8.665:   78.3626%  (       15)
00:07:32.591      8.665 -     8.714:   78.7011%  (       11)
00:07:32.591      8.714 -     8.763:   79.0705%  (       12)
00:07:32.591      8.763 -     8.812:   79.3783%  (       10)
00:07:32.591      8.812 -     8.862:   79.7784%  (       13)
00:07:32.591      8.862 -     8.911:   79.9938%  (        7)
00:07:32.591      8.911 -     8.960:   80.6094%  (       20)
00:07:32.591      8.960 -     9.009:   81.4097%  (       26)
00:07:32.591      9.009 -     9.058:   82.0868%  (       22)
00:07:32.591      9.058 -     9.108:   82.6716%  (       19)
00:07:32.591      9.108 -     9.157:   83.3179%  (       21)
00:07:32.591      9.157 -     9.206:   84.0566%  (       24)
00:07:32.591      9.206 -     9.255:   84.8877%  (       27)
00:07:32.591      9.255 -     9.305:   85.6263%  (       24)
00:07:32.591      9.305 -     9.354:   85.9957%  (       12)
00:07:32.591      9.354 -     9.403:   86.7959%  (       26)
00:07:32.591      9.403 -     9.452:   87.2576%  (       15)
00:07:32.591      9.452 -     9.502:   87.7501%  (       16)
00:07:32.591      9.502 -     9.551:   88.3041%  (       18)
00:07:32.591      9.551 -     9.600:   88.8889%  (       19)
00:07:32.591      9.600 -     9.649:   89.1351%  (        8)
00:07:32.591      9.649 -     9.698:   89.6276%  (       16)
00:07:32.591      9.698 -     9.748:   89.8430%  (        7)
00:07:32.591      9.748 -     9.797:   90.0893%  (        8)
00:07:32.591      9.797 -     9.846:   90.3355%  (        8)
00:07:32.591      9.846 -     9.895:   90.7664%  (       14)
00:07:32.591      9.895 -     9.945:   91.1050%  (       11)
00:07:32.591      9.945 -     9.994:   91.3204%  (        7)
00:07:32.591      9.994 -    10.043:   91.5359%  (        7)
00:07:32.591     10.043 -    10.092:   91.8436%  (       10)
00:07:32.591     10.092 -    10.142:   92.1514%  (       10)
00:07:32.591     10.142 -    10.191:   92.3361%  (        6)
00:07:32.591     10.191 -    10.240:   92.4284%  (        3)
00:07:32.591     10.240 -    10.289:   92.7054%  (        9)
00:07:32.591     10.289 -    10.338:   92.8286%  (        4)
00:07:32.591     10.338 -    10.388:   92.8901%  (        2)
00:07:32.591     10.388 -    10.437:   93.0748%  (        6)
00:07:32.591     10.437 -    10.486:   93.1056%  (        1)
00:07:32.591     10.486 -    10.535:   93.1363%  (        1)
00:07:32.591     10.535 -    10.585:   93.1979%  (        2)
00:07:32.591     10.585 -    10.634:   93.4134%  (        7)
00:07:32.592     10.634 -    10.683:   93.5673%  (        5)
00:07:32.592     10.683 -    10.732:   93.7827%  (        7)
00:07:32.592     10.732 -    10.782:   93.8750%  (        3)
00:07:32.592     10.782 -    10.831:   93.9982%  (        4)
00:07:32.592     10.831 -    10.880:   94.1520%  (        5)
00:07:32.592     10.880 -    10.929:   94.4906%  (       11)
00:07:32.592     10.929 -    10.978:   94.6445%  (        5)
00:07:32.592     10.978 -    11.028:   94.8292%  (        6)
00:07:32.592     11.028 -    11.077:   94.9215%  (        3)
00:07:32.592     11.077 -    11.126:   95.0754%  (        5)
00:07:32.592     11.126 -    11.175:   95.1062%  (        1)
00:07:32.592     11.175 -    11.225:   95.4140%  (       10)
00:07:32.592     11.225 -    11.274:   95.5371%  (        4)
00:07:32.592     11.274 -    11.323:   95.6294%  (        3)
00:07:32.592     11.323 -    11.372:   95.6910%  (        2)
00:07:32.592     11.372 -    11.422:   95.7833%  (        3)
00:07:32.592     11.422 -    11.471:   95.8141%  (        1)
00:07:32.592     11.471 -    11.520:   95.9064%  (        3)
00:07:32.592     11.520 -    11.569:   95.9680%  (        2)
00:07:32.592     11.618 -    11.668:   96.0295%  (        2)
00:07:32.592     11.668 -    11.717:   96.0603%  (        1)
00:07:32.592     11.717 -    11.766:   96.0911%  (        1)
00:07:32.592     11.766 -    11.815:   96.1219%  (        1)
00:07:32.592     11.815 -    11.865:   96.1834%  (        2)
00:07:32.592     11.865 -    11.914:   96.2758%  (        3)
00:07:32.592     11.963 -    12.012:   96.3066%  (        1)
00:07:32.592     12.012 -    12.062:   96.3989%  (        3)
00:07:32.592     12.160 -    12.209:   96.4912%  (        3)
00:07:32.592     12.209 -    12.258:   96.5220%  (        1)
00:07:32.592     12.258 -    12.308:   96.5836%  (        2)
00:07:32.592     12.357 -    12.406:   96.6451%  (        2)
00:07:32.592     12.455 -    12.505:   96.7990%  (        5)
00:07:32.592     12.505 -    12.554:   96.8298%  (        1)
00:07:32.592     12.603 -    12.702:   96.8914%  (        2)
00:07:32.592     12.702 -    12.800:   96.9837%  (        3)
00:07:32.592     12.800 -    12.898:   97.0145%  (        1)
00:07:32.592     13.095 -    13.194:   97.0452%  (        1)
00:07:32.592     13.194 -    13.292:   97.0760%  (        1)
00:07:32.592     13.292 -    13.391:   97.1068%  (        1)
00:07:32.592     13.391 -    13.489:   97.1376%  (        1)
00:07:32.592     13.489 -    13.588:   97.1991%  (        2)
00:07:32.592     13.588 -    13.686:   97.2299%  (        1)
00:07:32.592     13.883 -    13.982:   97.2607%  (        1)
00:07:32.592     14.474 -    14.572:   97.3223%  (        2)
00:07:32.592     14.671 -    14.769:   97.3530%  (        1)
00:07:32.592     15.458 -    15.557:   97.3838%  (        1)
00:07:32.592     15.655 -    15.754:   97.4146%  (        1)
00:07:32.592     15.754 -    15.852:   97.4454%  (        1)
00:07:32.592     16.443 -    16.542:   97.4761%  (        1)
00:07:32.592     16.837 -    16.935:   97.5069%  (        1)
00:07:32.592     17.428 -    17.526:   97.5993%  (        3)
00:07:32.592     17.822 -    17.920:   97.6608%  (        2)
00:07:32.592     17.920 -    18.018:   97.7224%  (        2)
00:07:32.592     18.018 -    18.117:   97.7532%  (        1)
00:07:32.592     18.117 -    18.215:   97.8455%  (        3)
00:07:32.592     18.215 -    18.314:   98.0609%  (        7)
00:07:32.592     18.314 -    18.412:   98.3072%  (        8)
00:07:32.592     18.412 -    18.511:   98.5226%  (        7)
00:07:32.592     18.511 -    18.609:   98.6150%  (        3)
00:07:32.592     18.708 -    18.806:   98.6457%  (        1)
00:07:32.592     18.806 -    18.905:   98.6765%  (        1)
00:07:32.592     18.905 -    19.003:   98.7073%  (        1)
00:07:32.592     19.102 -    19.200:   98.7381%  (        1)
00:07:32.592     19.298 -    19.397:   98.8304%  (        3)
00:07:32.592     19.397 -    19.495:   99.0459%  (        7)
00:07:32.592     19.495 -    19.594:   99.1382%  (        3)
00:07:32.592     19.594 -    19.692:   99.2921%  (        5)
00:07:32.592     19.692 -    19.791:   99.4460%  (        5)
00:07:32.592     19.889 -    19.988:   99.4768%  (        1)
00:07:32.592     19.988 -    20.086:   99.5383%  (        2)
00:07:32.592     20.480 -    20.578:   99.5999%  (        2)
00:07:32.592     20.775 -    20.874:   99.6614%  (        2)
00:07:32.592     20.874 -    20.972:   99.6922%  (        1)
00:07:32.592     21.169 -    21.268:   99.7230%  (        1)
00:07:32.592     21.760 -    21.858:   99.7538%  (        1)
00:07:32.592     23.237 -    23.335:   99.7845%  (        1)
00:07:32.592     24.517 -    24.615:   99.8153%  (        1)
00:07:32.592     25.009 -    25.108:   99.8461%  (        1)
00:07:32.592     32.295 -    32.492:   99.8769%  (        1)
00:07:32.592     42.535 -    42.732:   99.9077%  (        1)
00:07:32.592     59.077 -    59.471:   99.9384%  (        1)
00:07:32.592     59.471 -    59.865:   99.9692%  (        1)
00:07:32.592    248.911 -   250.486:  100.0000%  (        1)
00:07:32.592  
00:07:32.592  Complete histogram
00:07:32.592  ==================
00:07:32.592         Range in us     Cumulative     Count
00:07:32.592      5.342 -     5.366:    0.0616%  (        2)
00:07:32.592      5.366 -     5.391:    0.5232%  (       15)
00:07:32.592      5.391 -     5.415:    2.8932%  (       77)
00:07:32.592      5.415 -     5.440:    8.7719%  (      191)
00:07:32.592      5.440 -     5.465:   19.3290%  (      343)
00:07:32.592      5.465 -     5.489:   33.6411%  (      465)
00:07:32.592      5.489 -     5.514:   47.7378%  (      458)
00:07:32.592      5.514 -     5.538:   58.2641%  (      342)
00:07:32.592      5.538 -     5.563:   65.3124%  (      229)
00:07:32.592      5.563 -     5.588:   69.7753%  (      145)
00:07:32.592      5.588 -     5.612:   73.4688%  (      120)
00:07:32.592      5.612 -     5.637:   75.9003%  (       79)
00:07:32.592      5.637 -     5.662:   77.3161%  (       46)
00:07:32.592      5.662 -     5.686:   78.7319%  (       46)
00:07:32.592      5.686 -     5.711:   79.9631%  (       40)
00:07:32.592      5.711 -     5.735:   80.8249%  (       28)
00:07:32.592      5.735 -     5.760:   81.3789%  (       18)
00:07:32.592      5.760 -     5.785:   81.8098%  (       14)
00:07:32.592      5.785 -     5.809:   82.1791%  (       12)
00:07:32.592      5.809 -     5.834:   82.2715%  (        3)
00:07:32.592      5.834 -     5.858:   82.4254%  (        5)
00:07:32.592      5.858 -     5.883:   82.7947%  (       12)
00:07:32.592      5.883 -     5.908:   82.9178%  (        4)
00:07:32.592      5.908 -     5.932:   83.0102%  (        3)
00:07:32.592      5.932 -     5.957:   83.1948%  (        6)
00:07:32.592      5.957 -     5.982:   83.3795%  (        6)
00:07:32.592      5.982 -     6.006:   83.5334%  (        5)
00:07:32.592      6.006 -     6.031:   83.6873%  (        5)
00:07:32.592      6.031 -     6.055:   83.8412%  (        5)
00:07:32.592      6.055 -     6.080:   84.1182%  (        9)
00:07:32.592      6.080 -     6.105:   84.3336%  (        7)
00:07:32.592      6.105 -     6.129:   84.4260%  (        3)
00:07:32.592      6.129 -     6.154:   84.5183%  (        3)
00:07:32.592      6.154 -     6.178:   84.6106%  (        3)
00:07:32.592      6.178 -     6.203:   84.7645%  (        5)
00:07:32.592      6.203 -     6.228:   84.9800%  (        7)
00:07:32.592      6.228 -     6.252:   85.1339%  (        5)
00:07:32.592      6.252 -     6.277:   85.2262%  (        3)
00:07:32.592      6.277 -     6.302:   85.4417%  (        7)
00:07:32.592      6.302 -     6.351:   85.9034%  (       15)
00:07:32.592      6.351 -     6.400:   86.3343%  (       14)
00:07:32.592      6.400 -     6.449:   86.8267%  (       16)
00:07:32.592      6.449 -     6.498:   87.2884%  (       15)
00:07:32.592      6.498 -     6.548:   87.6885%  (       13)
00:07:32.592      6.548 -     6.597:   88.2733%  (       19)
00:07:32.592      6.597 -     6.646:   89.0428%  (       25)
00:07:32.592      6.646 -     6.695:   89.4737%  (       14)
00:07:32.592      6.695 -     6.745:   89.8430%  (       12)
00:07:32.592      6.745 -     6.794:   89.9969%  (        5)
00:07:32.592      6.794 -     6.843:   90.1200%  (        4)
00:07:32.592      6.843 -     6.892:   90.3047%  (        6)
00:07:32.592      6.892 -     6.942:   90.4586%  (        5)
00:07:32.592      6.942 -     6.991:   90.5509%  (        3)
00:07:32.592      6.991 -     7.040:   90.6125%  (        2)
00:07:32.592      7.040 -     7.089:   90.7048%  (        3)
00:07:32.592      7.089 -     7.138:   90.7972%  (        3)
00:07:32.592      7.188 -     7.237:   90.8587%  (        2)
00:07:32.592      7.237 -     7.286:   90.9203%  (        2)
00:07:32.592      7.286 -     7.335:   90.9511%  (        1)
00:07:32.592      7.335 -     7.385:   91.0126%  (        2)
00:07:32.592      7.385 -     7.434:   91.0434%  (        1)
00:07:32.592      7.434 -     7.483:   91.0742%  (        1)
00:07:32.592      7.483 -     7.532:   91.1357%  (        2)
00:07:32.592      7.532 -     7.582:   91.1665%  (        1)
00:07:32.592      7.582 -     7.631:   91.1973%  (        1)
00:07:32.592      7.631 -     7.680:   91.2281%  (        1)
00:07:32.592      7.680 -     7.729:   91.2896%  (        2)
00:07:32.592      7.729 -     7.778:   91.3512%  (        2)
00:07:32.592      7.778 -     7.828:   91.4435%  (        3)
00:07:32.592      7.877 -     7.926:   91.4743%  (        1)
00:07:32.592      7.926 -     7.975:   91.5051%  (        1)
00:07:32.592      8.025 -     8.074:   91.5666%  (        2)
00:07:32.592      8.074 -     8.123:   91.6590%  (        3)
00:07:32.592      8.123 -     8.172:   91.7513%  (        3)
00:07:32.592      8.172 -     8.222:   91.8129%  (        2)
00:07:32.592      8.222 -     8.271:   91.8436%  (        1)
00:07:32.592      8.271 -     8.320:   91.9052%  (        2)
00:07:32.592      8.320 -     8.369:   91.9360%  (        1)
00:07:32.592      8.369 -     8.418:   92.0591%  (        4)
00:07:32.592      8.418 -     8.468:   92.0899%  (        1)
00:07:32.592      8.517 -     8.566:   92.1207%  (        1)
00:07:32.592      8.566 -     8.615:   92.2130%  (        3)
00:07:32.592      8.615 -     8.665:   92.2438%  (        1)
00:07:32.592      8.714 -     8.763:   92.2745%  (        1)
00:07:32.592      8.763 -     8.812:   92.3361%  (        2)
00:07:32.592      8.812 -     8.862:   92.3977%  (        2)
00:07:32.592      8.862 -     8.911:   92.4284%  (        1)
00:07:32.592      8.911 -     8.960:   92.4592%  (        1)
00:07:32.592      9.157 -     9.206:   92.4900%  (        1)
00:07:32.592      9.354 -     9.403:   92.5208%  (        1)
00:07:32.592      9.502 -     9.551:   92.5516%  (        1)
00:07:32.592      9.698 -     9.748:   92.5823%  (        1)
00:07:32.592     11.323 -    11.372:   92.6131%  (        1)
00:07:32.592     12.702 -    12.800:   92.6439%  (        1)
00:07:32.592     12.800 -    12.898:   92.7054%  (        2)
00:07:32.592     12.898 -    12.997:   92.7978%  (        3)
00:07:32.592     12.997 -    13.095:   92.8286%  (        1)
00:07:32.592     13.194 -    13.292:   92.8593%  (        1)
00:07:32.592     13.292 -    13.391:   92.8901%  (        1)
00:07:32.592     13.489 -    13.588:   92.9209%  (        1)
00:07:32.592     13.686 -    13.785:   92.9825%  (        2)
00:07:32.592     13.785 -    13.883:   93.3210%  (       11)
00:07:32.592     13.883 -    13.982:   93.6288%  (       10)
00:07:32.592     13.982 -    14.080:   93.8135%  (        6)
00:07:32.592     14.080 -    14.178:   93.9366%  (        4)
00:07:32.592     14.178 -    14.277:   94.0905%  (        5)
00:07:32.592     14.277 -    14.375:   94.1213%  (        1)
00:07:32.592     14.375 -    14.474:   94.2136%  (        3)
00:07:32.593     14.474 -    14.572:   94.3059%  (        3)
00:07:32.593     14.572 -    14.671:   94.3983%  (        3)
00:07:32.593     14.671 -    14.769:   94.5214%  (        4)
00:07:32.593     14.769 -    14.868:   94.6137%  (        3)
00:07:32.593     14.868 -    14.966:   94.6445%  (        1)
00:07:32.593     14.966 -    15.065:   94.7061%  (        2)
00:07:32.593     15.163 -    15.262:   94.8292%  (        4)
00:07:32.593     15.262 -    15.360:   94.8907%  (        2)
00:07:32.593     15.360 -    15.458:   95.1677%  (        9)
00:07:32.593     15.458 -    15.557:   95.3832%  (        7)
00:07:32.593     15.557 -    15.655:   95.4140%  (        1)
00:07:32.593     15.655 -    15.754:   95.4755%  (        2)
00:07:32.593     15.754 -    15.852:   95.5371%  (        2)
00:07:32.593     15.852 -    15.951:   95.5679%  (        1)
00:07:32.593     15.951 -    16.049:   95.6294%  (        2)
00:07:32.593     16.049 -    16.148:   95.7525%  (        4)
00:07:32.593     16.148 -    16.246:   95.7833%  (        1)
00:07:32.593     16.246 -    16.345:   95.8141%  (        1)
00:07:32.593     16.542 -    16.640:   95.8449%  (        1)
00:07:32.593     16.640 -    16.738:   95.8757%  (        1)
00:07:32.593     17.132 -    17.231:   95.9064%  (        1)
00:07:32.593     17.231 -    17.329:   95.9372%  (        1)
00:07:32.593     17.329 -    17.428:   96.0911%  (        5)
00:07:32.593     17.428 -    17.526:   96.2142%  (        4)
00:07:32.593     17.526 -    17.625:   96.2758%  (        2)
00:07:32.593     17.625 -    17.723:   96.3989%  (        4)
00:07:32.593     17.723 -    17.822:   96.5836%  (        6)
00:07:32.593     17.822 -    17.920:   97.0145%  (       14)
00:07:32.593     17.920 -    18.018:   97.3530%  (       11)
00:07:32.593     18.018 -    18.117:   97.5685%  (        7)
00:07:32.593     18.117 -    18.215:   97.6300%  (        2)
00:07:32.593     18.215 -    18.314:   97.7224%  (        3)
00:07:32.593     18.609 -    18.708:   97.8455%  (        4)
00:07:32.593     18.708 -    18.806:   98.1533%  (       10)
00:07:32.593     18.806 -    18.905:   98.4611%  (       10)
00:07:32.593     18.905 -    19.003:   98.6457%  (        6)
00:07:32.593     19.003 -    19.102:   98.8304%  (        6)
00:07:32.593     19.102 -    19.200:   98.8920%  (        2)
00:07:32.593     19.200 -    19.298:   98.9227%  (        1)
00:07:32.593     19.298 -    19.397:   98.9535%  (        1)
00:07:32.593     19.397 -    19.495:   98.9843%  (        1)
00:07:32.593     19.594 -    19.692:   99.0151%  (        1)
00:07:32.593     19.889 -    19.988:   99.0766%  (        2)
00:07:32.593     19.988 -    20.086:   99.1382%  (        2)
00:07:32.593     20.283 -    20.382:   99.1690%  (        1)
00:07:32.593     20.578 -    20.677:   99.1998%  (        1)
00:07:32.593     20.677 -    20.775:   99.2305%  (        1)
00:07:32.593     20.775 -    20.874:   99.2613%  (        1)
00:07:32.593     21.268 -    21.366:   99.3229%  (        2)
00:07:32.593     22.055 -    22.154:   99.3536%  (        1)
00:07:32.593     22.252 -    22.351:   99.3844%  (        1)
00:07:32.593     23.631 -    23.729:   99.4152%  (        1)
00:07:32.593     25.403 -    25.600:   99.4460%  (        1)
00:07:32.593     25.797 -    25.994:   99.4768%  (        1)
00:07:32.593     26.782 -    26.978:   99.5075%  (        1)
00:07:32.593     27.963 -    28.160:   99.5383%  (        1)
00:07:32.593     28.357 -    28.554:   99.5691%  (        1)
00:07:32.593     29.342 -    29.538:   99.5999%  (        1)
00:07:32.593     29.538 -    29.735:   99.6307%  (        1)
00:07:32.593     30.917 -    31.114:   99.6614%  (        1)
00:07:32.593     31.114 -    31.311:   99.6922%  (        1)
00:07:32.593     32.492 -    32.689:   99.7230%  (        1)
00:07:32.593     33.083 -    33.280:   99.7538%  (        1)
00:07:32.593     33.280 -    33.477:   99.7845%  (        1)
00:07:32.593     33.477 -    33.674:   99.8153%  (        1)
00:07:32.593     35.643 -    35.840:   99.8461%  (        1)
00:07:32.593     45.489 -    45.686:   99.8769%  (        1)
00:07:32.593    156.751 -   157.539:   99.9077%  (        1)
00:07:32.593    169.354 -   170.142:   99.9384%  (        1)
00:07:32.593    226.855 -   228.431:   99.9692%  (        1)
00:07:32.593    393.846 -   395.422:  100.0000%  (        1)
00:07:32.593  
00:07:32.593  
00:07:32.593  real	0m1.351s
00:07:32.593  user	0m1.031s
00:07:32.593  sys	0m0.319s
00:07:32.593   10:22:24 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:32.593  ************************************
00:07:32.593  END TEST nvme_overhead
00:07:32.593  ************************************
00:07:32.593   10:22:24 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
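The nvme_overhead test above prints a cumulative latency histogram, one line per bucket: `<low> - <high>: <cumulative%> (<count>)`. A minimal sketch of locating the bucket where a target percentile is crossed, assuming that line format (inferred from this log, not an SPDK-documented contract):

```python
import re

# A few histogram lines copied from the log above, around the p99 crossing.
SAMPLE = """\
     19.397 -    19.495:   98.9843%  (        1)
     19.594 -    19.692:   99.0151%  (        1)
     19.889 -    19.988:   99.0766%  (        2)
"""

def percentile_bucket(text, target):
    # Each line: "<low> - <high>: <cumulative>% (<count>)"; return the first
    # bucket whose cumulative percentage reaches the target, or None.
    pat = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\)")
    for low, high, cum, _count in pat.findall(text):
        if float(cum) >= target:
            return float(low), float(high)
    return None
```

On this excerpt, `percentile_bucket(SAMPLE, 99.0)` lands on the 19.594-19.692 bucket, matching where the full histogram crosses 99%.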
00:07:32.854   10:22:24 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:32.854   10:22:24 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:07:32.854   10:22:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:32.854   10:22:24 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:32.854  ************************************
00:07:32.854  START TEST nvme_arbitration
00:07:32.854  ************************************
00:07:32.854   10:22:24 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:07:33.113  EAL: TSC is not safe to use in SMP mode
00:07:33.113  EAL: TSC is not invariant
00:07:33.113  [2024-12-09 10:22:25.126961] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:37.315  Initializing NVMe Controllers
00:07:37.315  Attaching to 0000:00:10.0
00:07:37.315  Attached to 0000:00:10.0
00:07:37.315  Associating QEMU NVMe Ctrl       (12340               ) with lcore 0
00:07:37.315  Associating QEMU NVMe Ctrl       (12340               ) with lcore 1
00:07:37.315  Associating QEMU NVMe Ctrl       (12340               ) with lcore 2
00:07:37.315  Associating QEMU NVMe Ctrl       (12340               ) with lcore 3
00:07:37.315  /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:07:37.315  /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:07:37.315  Initialization complete. Launching workers.
00:07:37.315  Starting thread on core 1 with urgent priority queue
00:07:37.315  Starting thread on core 2 with urgent priority queue
00:07:37.315  Starting thread on core 3 with urgent priority queue
00:07:37.315  Starting thread on core 0 with urgent priority queue
00:07:37.315  QEMU NVMe Ctrl       (12340               ) core 0:  6943.67 IO/s    14.40 secs/100000 ios
00:07:37.315  QEMU NVMe Ctrl       (12340               ) core 1:  7000.67 IO/s    14.28 secs/100000 ios
00:07:37.315  QEMU NVMe Ctrl       (12340               ) core 2:  6997.00 IO/s    14.29 secs/100000 ios
00:07:37.315  QEMU NVMe Ctrl       (12340               ) core 3:  6921.00 IO/s    14.45 secs/100000 ios
00:07:37.315  ========================================================
00:07:37.315  
00:07:37.315  
00:07:37.315  real	0m4.274s
00:07:37.315  user	0m12.972s
00:07:37.315  sys	0m0.337s
00:07:37.315   10:22:29 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:37.315   10:22:29 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:07:37.315  ************************************
00:07:37.315  END TEST nvme_arbitration
00:07:37.315  ************************************
00:07:37.315   10:22:29 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:37.315   10:22:29 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:07:37.315   10:22:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:37.315   10:22:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:37.315  ************************************
00:07:37.315  START TEST nvme_single_aen
00:07:37.315  ************************************
00:07:37.315   10:22:29 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:07:37.315  EAL: TSC is not safe to use in SMP mode
00:07:37.315  EAL: TSC is not invariant
00:07:37.315  [2024-12-09 10:22:29.467818] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:37.576  Asynchronous Event Request test
00:07:37.576  Attaching to 0000:00:10.0
00:07:37.576  Attached to 0000:00:10.0
00:07:37.576  Reset controller to setup AER completions for this process
00:07:37.576  Registering asynchronous event callbacks...
00:07:37.576  Getting orig temperature thresholds of all controllers
00:07:37.576  0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius)
00:07:37.576  Setting all controllers temperature threshold low to trigger AER
00:07:37.576  Waiting for all controllers temperature threshold to be set lower
00:07:37.576  0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01
00:07:37.576  aer_cb - Resetting Temp Threshold for device: 0000:00:10.0
00:07:37.576  Waiting for all controllers to trigger AER and reset threshold
00:07:37.576  0000:00:10.0: Current Temperature:         323 Kelvin (50 Celsius)
00:07:37.576  Cleaning up...
00:07:37.576  
00:07:37.576  real	0m0.348s
00:07:37.576  user	0m0.007s
00:07:37.576  sys	0m0.340s
00:07:37.576   10:22:29 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:37.576   10:22:29 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x
00:07:37.576  ************************************
00:07:37.576  END TEST nvme_single_aen
00:07:37.576  ************************************
00:07:37.576   10:22:29 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers
00:07:37.576   10:22:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:37.576   10:22:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:37.576   10:22:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:37.576  ************************************
00:07:37.576  START TEST nvme_doorbell_aers
00:07:37.576  ************************************
00:07:37.576   10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers
00:07:37.576   10:22:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=()
00:07:37.576   10:22:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf
00:07:37.576   10:22:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs))
00:07:37.576    10:22:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs
00:07:37.576    10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:37.576    10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs
00:07:37.576    10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:37.576     10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:37.576     10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:37.576    10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:07:37.576    10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:07:37.576   10:22:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}"
00:07:37.576   10:22:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:37.838  EAL: TSC is not safe to use in SMP mode
00:07:37.838  EAL: TSC is not invariant
00:07:37.838  [2024-12-09 10:22:29.932318] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:37.838  Executing: test_write_invalid_db
00:07:37.838  Waiting for AER completion...
00:07:37.838  Asynchronous Event received.
00:07:37.838  Error Information Log Page received.
00:07:37.838  Success: test_write_invalid_db
00:07:37.838  
00:07:37.838  Executing: test_invalid_db_write_overflow_sq
00:07:37.838  Waiting for AER completion...
00:07:37.838  Asynchronous Event received.
00:07:37.838  Error Information Log Page received.
00:07:37.838  Success: test_invalid_db_write_overflow_sq
00:07:37.838  
00:07:37.838  Executing: test_invalid_db_write_overflow_cq
00:07:37.838  Waiting for AER completion...
00:07:37.838  Asynchronous Event received.
00:07:37.838  Error Information Log Page received.
00:07:37.838  Success: test_invalid_db_write_overflow_cq
00:07:37.838  
00:07:37.838  
00:07:37.838  real	0m0.399s
00:07:37.838  user	0m0.044s
00:07:37.838  sys	0m0.364s
00:07:37.838   10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:37.838  ************************************
00:07:37.838  END TEST nvme_doorbell_aers
00:07:37.838  ************************************
00:07:37.838   10:22:29 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x
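The doorbell_aers trace above builds its BDF list by piping `gen_nvme.sh` output through the jq filter `.config[].params.traddr`. The same extraction can be sketched in Python over a hypothetical sample of that JSON shape (the real script's output may carry more fields):

```python
import json

# Hypothetical example of the JSON shape gen_nvme.sh emits; only the fields
# the jq filter touches are shown here.
CONFIG = json.loads("""
{"config": [{"method": "bdev_nvme_attach_controller",
             "params": {"name": "Nvme0", "trtype": "PCIe",
                        "traddr": "0000:00:10.0"}}]}
""")

def nvme_traddrs(cfg):
    # Equivalent of the jq filter '.config[].params.traddr'.
    return [entry["params"]["traddr"] for entry in cfg["config"]]
```

On this run the filter yields the single controller `0000:00:10.0`, matching the `printf '%s\n' 0000:00:10.0` seen in the trace.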
00:07:38.100    10:22:30 nvme -- nvme/nvme.sh@97 -- # uname
00:07:38.100   10:22:30 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']'
00:07:38.100   10:22:30 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:07:38.100   10:22:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.100   10:22:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.100   10:22:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:38.100  ************************************
00:07:38.100  START TEST bdev_nvme_reset_stuck_adm_cmd
00:07:38.100  ************************************
00:07:38.100   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh
00:07:38.100  * Looking for test storage...
00:07:38.100  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:38.100     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version
00:07:38.100     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-:
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-:
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<'
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:38.100    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:38.100     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1
00:07:38.361     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1
00:07:38.361     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:38.361     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1
00:07:38.361     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2
00:07:38.361     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2
00:07:38.361     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:38.361     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:38.361  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.361  		--rc genhtml_branch_coverage=1
00:07:38.361  		--rc genhtml_function_coverage=1
00:07:38.361  		--rc genhtml_legend=1
00:07:38.361  		--rc geninfo_all_blocks=1
00:07:38.361  		--rc geninfo_unexecuted_blocks=1
00:07:38.361  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:38.361  		'
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:38.361  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.361  		--rc genhtml_branch_coverage=1
00:07:38.361  		--rc genhtml_function_coverage=1
00:07:38.361  		--rc genhtml_legend=1
00:07:38.361  		--rc geninfo_all_blocks=1
00:07:38.361  		--rc geninfo_unexecuted_blocks=1
00:07:38.361  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:38.361  		'
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:38.361  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.361  		--rc genhtml_branch_coverage=1
00:07:38.361  		--rc genhtml_function_coverage=1
00:07:38.361  		--rc genhtml_legend=1
00:07:38.361  		--rc geninfo_all_blocks=1
00:07:38.361  		--rc geninfo_unexecuted_blocks=1
00:07:38.361  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:38.361  		'
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:38.361  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:38.361  		--rc genhtml_branch_coverage=1
00:07:38.361  		--rc genhtml_function_coverage=1
00:07:38.361  		--rc genhtml_legend=1
00:07:38.361  		--rc geninfo_all_blocks=1
00:07:38.361  		--rc geninfo_unexecuted_blocks=1
00:07:38.361  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:38.361  		'
00:07:38.361   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0
00:07:38.361   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000
00:07:38.361   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5
00:07:38.361   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0
00:07:38.361   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=()
00:07:38.361    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs
00:07:38.362    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:07:38.362     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:07:38.362     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:38.362     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs
00:07:38.362     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:38.362      10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:38.362      10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:38.362     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:07:38.362     10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:07:38.362    10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']'
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=50771
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 50771
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 50771 ']'
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.362  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.362   10:22:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:38.362  [2024-12-09 10:22:30.320276] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:38.362  [2024-12-09 10:22:30.320474] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:38.621  EAL: TSC is not safe to use in SMP mode
00:07:38.621  EAL: TSC is not invariant
00:07:38.621  [2024-12-09 10:22:30.639334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:38.621  [2024-12-09 10:22:30.671620] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:38.621  [2024-12-09 10:22:30.671652] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:07:38.621  [2024-12-09 10:22:30.671659] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2].
00:07:38.621  [2024-12-09 10:22:30.671665] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3].
00:07:38.621  [2024-12-09 10:22:30.671882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:38.621  [2024-12-09 10:22:30.672151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:38.621  [2024-12-09 10:22:30.672609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:07:38.621  [2024-12-09 10:22:30.672783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.238   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.238   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0
00:07:39.238   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:39.239  [2024-12-09 10:22:31.215730] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:39.239  nvme0n1
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.239    10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:39.239  true
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:39.239    10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1733739751
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=50783
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2
00:07:39.239   10:22:31 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
00:07:41.772   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:41.773  [2024-12-09 10:22:33.434127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller
00:07:41.773  [2024-12-09 10:22:33.434269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:07:41.773  [2024-12-09 10:22:33.434282] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0
00:07:41.773  [2024-12-09 10:22:33.434292] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:41.773  [2024-12-09 10:22:33.436668] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful.
00:07:41.773  Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 50783
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 50783
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 50783
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA==
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:07:41.773     10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:07:41.773     10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.bHj0dI
00:07:41.773      10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
00:07:41.773     10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"'
00:07:41.773     10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.MDnJRn
00:07:41.773      10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA==
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0
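The two trace blocks above exercise the same `base64_decode_bits` helper twice: once with shift 1 / mask 255 to get the NVMe status code (SC = 0x1) and once with shift 9 / mask 3 to get the status code type (SCT = 0x0). A reconstruction of that helper, pieced together from the traced commands — the assembly of the 16-bit status word from the last two bytes of the decoded completion is inferred from the trace (status=2 yielding sc=0x1, sct=0x0), not copied verbatim from the script:

```shell
#!/usr/bin/env bash
# base64_decode_bits <b64-cpl> <shift> <mask>
# Decodes a base64-encoded NVMe completion and extracts one bit field.
base64_decode_bits() {
	local bin_array status
	# One "0xNN" token per decoded byte, exactly as in the trace above.
	bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"'))
	# The NVMe status word occupies the last two bytes of the 16-byte
	# completion (inferred: byte layout makes status=2 for this cpl).
	status=$((bin_array[-2] | bin_array[-1] << 8))
	printf '0x%x' $(((status >> $2) & $3))
}
```

With the cpl from the log, `base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255` prints `0x1` and `... 9 3` prints `0x0`, matching `nvme_status_sc` and `nvme_status_sct` above.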
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 50771
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 50771 ']'
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 50771
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # ps -c -o command 50771
00:07:41.773    10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # tail -1
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:07:41.773  killing process with pid 50771
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 50771'
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 50771
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 50771
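The `killprocess 50771` trace above checks the pid is alive, resolves its name (the FreeBSD branch via `ps -c -o command | tail -1`, since `uname` is not Linux here), refuses a bare `sudo`, then kills and reaps it. A simplified reconstruction of that helper — not the exact `autotest_common.sh` implementation, just the control flow visible in the trace:

```shell
#!/usr/bin/env bash
# killprocess <pid>: name-check, kill, and reap a test process.
killprocess() {
	local pid=$1 process_name
	[ -n "$pid" ] || return 1
	kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
	if [ "$(uname)" = Linux ]; then
		process_name=$(< "/proc/$pid/comm")
	else
		# FreeBSD path seen in the trace: ps -c -o command | tail -1
		process_name=$(ps -c -o command "$pid" | tail -1)
	fi
	# Never kill a raw sudo wrapper; that would orphan its child.
	[ "$process_name" != sudo ] || return 1
	echo "killing process with pid $pid"
	kill "$pid"
	wait "$pid" 2>/dev/null || true          # reap, tolerating SIGTERM status
}
```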
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout ))
00:07:41.773  
00:07:41.773  real	0m3.573s
00:07:41.773  user	0m12.339s
00:07:41.773  sys	0m0.553s
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:41.773  ************************************
00:07:41.773  END TEST bdev_nvme_reset_stuck_adm_cmd
00:07:41.773  ************************************
00:07:41.773   10:22:33 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x
00:07:41.773   10:22:33 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]]
00:07:41.773   10:22:33 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test
00:07:41.773   10:22:33 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:41.773   10:22:33 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:41.773   10:22:33 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:41.773  ************************************
00:07:41.773  START TEST nvme_fio
00:07:41.773  ************************************
00:07:41.773   10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test
00:07:41.773   10:22:33 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme
00:07:41.773   10:22:33 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false
00:07:41.773    10:22:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs
00:07:41.773    10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:41.773    10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs
00:07:41.773    10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:41.773     10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:41.773     10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:41.773    10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:07:41.773    10:22:33 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:07:41.773   10:22:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0')
00:07:41.773   10:22:33 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf
00:07:41.773   10:22:33 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:07:41.773   10:22:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:41.773   10:22:33 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:07:42.032  EAL: TSC is not safe to use in SMP mode
00:07:42.032  EAL: TSC is not invariant
00:07:42.032  [2024-12-09 10:22:33.986005] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:42.032   10:22:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:07:42.032   10:22:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0'
00:07:42.290  EAL: TSC is not safe to use in SMP mode
00:07:42.290  EAL: TSC is not invariant
00:07:42.290  [2024-12-09 10:22:34.318167] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:42.290   10:22:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:07:42.290   10:22:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib=
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:07:42.290    10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:07:42.290    10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:42.290    10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:07:42.290    10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:07:42.290    10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:07:42.290    10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:07:42.290   10:22:34 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
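Before launching fio, the `fio_plugin` trace above probes the SPDK plugin for a linked ASAN runtime (`ldd | grep libasan` / `grep libclang_rt.asan`, path taken from column 3) so any hit can be prepended to `LD_PRELOAD` ahead of the plugin; here both probes come back empty, so `LD_PRELOAD` ends up as just the plugin path. The extraction step can be shown in isolation with `find_sanitizer_lib`, a hypothetical helper that applies the same grep/awk parsing to ldd-style text on stdin instead of a real binary:

```shell
#!/usr/bin/env bash
# find_sanitizer_lib: print the resolved path of the first ASAN runtime
# found in ldd-style "name => path (addr)" lines read from stdin.
find_sanitizer_lib() {
	local ldd_output sanitizer lib
	ldd_output=$(cat)
	for sanitizer in libasan libclang_rt.asan; do
		# Same extraction as the trace: grep the DSO name, take
		# the resolved path from column 3.
		lib=$(grep "$sanitizer" <<< "$ldd_output" | awk '{print $3}')
		if [ -n "$lib" ]; then
			printf '%s\n' "$lib"
			return 0
		fi
	done
	return 1
}
```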
00:07:42.290  test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:07:42.290  fio-3.35
00:07:42.548  Starting 1 thread
00:07:42.806  EAL: TSC is not safe to use in SMP mode
00:07:42.806  EAL: TSC is not invariant
00:07:42.806  [2024-12-09 10:22:34.743266] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:49.360  
00:07:49.360  test: (groupid=0, jobs=1): err= 0: pid=101453: Mon Dec  9 10:22:41 2024
00:07:49.360    read: IOPS=58.6k, BW=229MiB/s (240MB/s)(458MiB/2001msec)
00:07:49.360      slat (nsec): min=433, max=21586, avg=517.63, stdev=211.90
00:07:49.360      clat (usec): min=271, max=6458, avg=1091.56, stdev=303.61
00:07:49.360       lat (usec): min=271, max=6458, avg=1092.08, stdev=303.64
00:07:49.360      clat percentiles (usec):
00:07:49.360       |  1.00th=[  685],  5.00th=[  758], 10.00th=[  799], 20.00th=[  865],
00:07:49.360       | 30.00th=[  938], 40.00th=[ 1004], 50.00th=[ 1057], 60.00th=[ 1106],
00:07:49.360       | 70.00th=[ 1172], 80.00th=[ 1254], 90.00th=[ 1369], 95.00th=[ 1532],
00:07:49.360       | 99.00th=[ 2212], 99.50th=[ 2409], 99.90th=[ 3851], 99.95th=[ 4293],
00:07:49.360       | 99.99th=[ 5538]
00:07:49.360     bw (  KiB/s): min=199416, max=258968, per=94.37%, avg=221290.67, stdev=32770.47, samples=3
00:07:49.360     iops        : min=49854, max=64742, avg=55322.67, stdev=8192.62, samples=3
00:07:49.360    write: IOPS=58.5k, BW=229MiB/s (240MB/s)(457MiB/2001msec); 0 zone resets
00:07:49.360      slat (nsec): min=456, max=19723, avg=803.83, stdev=289.55
00:07:49.360      clat (usec): min=261, max=6473, avg=1090.51, stdev=304.72
00:07:49.360       lat (usec): min=268, max=6474, avg=1091.32, stdev=304.72
00:07:49.360      clat percentiles (usec):
00:07:49.360       |  1.00th=[  685],  5.00th=[  758], 10.00th=[  791], 20.00th=[  865],
00:07:49.360       | 30.00th=[  938], 40.00th=[ 1004], 50.00th=[ 1057], 60.00th=[ 1106],
00:07:49.360       | 70.00th=[ 1172], 80.00th=[ 1254], 90.00th=[ 1369], 95.00th=[ 1532],
00:07:49.360       | 99.00th=[ 2212], 99.50th=[ 2409], 99.90th=[ 3982], 99.95th=[ 4359],
00:07:49.360       | 99.99th=[ 5342]
00:07:49.360     bw (  KiB/s): min=197952, max=258208, per=94.14%, avg=220357.33, stdev=32963.85, samples=3
00:07:49.360     iops        : min=49488, max=64552, avg=55089.33, stdev=8240.96, samples=3
00:07:49.360    lat (usec)   : 500=0.36%, 750=4.05%, 1000=35.22%
00:07:49.360    lat (msec)   : 2=58.30%, 4=1.97%, 10=0.09%
00:07:49.360    cpu          : usr=100.00%, sys=0.00%, ctx=24, majf=0, minf=2
00:07:49.360    IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
00:07:49.360       submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:07:49.360       complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:07:49.360       issued rwts: total=117307,117094,0,0 short=0,0,0,0 dropped=0,0,0,0
00:07:49.360       latency   : target=0, window=0, percentile=100.00%, depth=128
00:07:49.360  
00:07:49.360  Run status group 0 (all jobs):
00:07:49.360     READ: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=458MiB (480MB), run=2001-2001msec
00:07:49.360    WRITE: bw=229MiB/s (240MB/s), 229MiB/s-229MiB/s (240MB/s-240MB/s), io=457MiB (480MB), run=2001-2001msec
00:07:49.360   10:22:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:07:49.360   10:22:41 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:07:49.360  
00:07:49.360  real	0m7.835s
00:07:49.360  user	0m6.418s
00:07:49.360  sys	0m1.318s
00:07:49.360   10:22:41 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.360  ************************************
00:07:49.360  END TEST nvme_fio
00:07:49.360  ************************************
00:07:49.360   10:22:41 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:07:49.360  
00:07:49.360  real	0m25.393s
00:07:49.360  user	0m36.163s
00:07:49.360  sys	0m7.043s
00:07:49.360   10:22:41 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.360   10:22:41 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:49.360  ************************************
00:07:49.360  END TEST nvme
00:07:49.360  ************************************
00:07:49.619   10:22:41  -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:07:49.619   10:22:41  -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:07:49.619   10:22:41  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.619   10:22:41  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.619   10:22:41  -- common/autotest_common.sh@10 -- # set +x
00:07:49.619  ************************************
00:07:49.619  START TEST nvme_scc
00:07:49.619  ************************************
00:07:49.619   10:22:41 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:07:49.619  * Looking for test storage...
00:07:49.619  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:49.619     10:22:41 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:49.619      10:22:41 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version
00:07:49.619      10:22:41 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:49.619     10:22:41 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@345 -- # : 1
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.619     10:22:41 nvme_scc -- scripts/common.sh@368 -- # return 0
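The `scripts/common.sh` trace above is the lcov version gate: `lt 1.15 2` splits both versions on `.`, `-`, and `:` and compares numeric components left to right. A condensed sketch of that comparison — `version_lt` is a hypothetical wrapper covering only the `<` case (the real `cmp_versions` handles all four operators and validates each component first), and it assumes purely numeric components:

```shell
#!/usr/bin/env bash
# version_lt <v1> <v2>: true iff v1 < v2, comparing dot/dash/colon-
# separated components numerically, with missing components read as 0.
version_lt() {
	local -a ver1 ver2
	local v max
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$2"
	max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
	for ((v = 0; v < max; v++)); do
		if (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
			return 0
		elif (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
			return 1
		fi
	done
	return 1   # equal versions are not "less than"
}
```

So `version_lt 1.15 2` succeeds on the first component (1 < 2), which is why the trace above returns 0 without examining `.15`.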
00:07:49.619     10:22:41 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:49.619     10:22:41 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:49.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.619  		--rc genhtml_branch_coverage=1
00:07:49.619  		--rc genhtml_function_coverage=1
00:07:49.619  		--rc genhtml_legend=1
00:07:49.619  		--rc geninfo_all_blocks=1
00:07:49.619  		--rc geninfo_unexecuted_blocks=1
00:07:49.619  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.619  		'
00:07:49.619     10:22:41 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:49.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.619  		--rc genhtml_branch_coverage=1
00:07:49.619  		--rc genhtml_function_coverage=1
00:07:49.619  		--rc genhtml_legend=1
00:07:49.619  		--rc geninfo_all_blocks=1
00:07:49.619  		--rc geninfo_unexecuted_blocks=1
00:07:49.619  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.619  		'
00:07:49.619     10:22:41 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:49.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.619  		--rc genhtml_branch_coverage=1
00:07:49.619  		--rc genhtml_function_coverage=1
00:07:49.619  		--rc genhtml_legend=1
00:07:49.619  		--rc geninfo_all_blocks=1
00:07:49.619  		--rc geninfo_unexecuted_blocks=1
00:07:49.619  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.619  		'
00:07:49.619     10:22:41 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:49.619  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.619  		--rc genhtml_branch_coverage=1
00:07:49.619  		--rc genhtml_function_coverage=1
00:07:49.619  		--rc genhtml_legend=1
00:07:49.619  		--rc geninfo_all_blocks=1
00:07:49.619  		--rc geninfo_unexecuted_blocks=1
00:07:49.619  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.619  		'
00:07:49.619    10:22:41 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:07:49.619       10:22:41 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:07:49.619      10:22:41 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:07:49.619     10:22:41 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:07:49.619     10:22:41 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:49.619      10:22:41 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:49.620       10:22:41 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:07:49.620       10:22:41 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:07:49.620       10:22:41 nvme_scc -- paths/export.sh@4 -- # export PATH
00:07:49.620       10:22:41 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:07:49.620     10:22:41 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:07:49.620    10:22:41 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:49.620    10:22:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:07:49.620   10:22:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]]
00:07:49.620   10:22:41 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0
00:07:49.620  
00:07:49.620  real	0m0.164s
00:07:49.620  user	0m0.153s
00:07:49.620  sys	0m0.052s
00:07:49.620   10:22:41 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:49.620   10:22:41 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:07:49.620  ************************************
00:07:49.620  END TEST nvme_scc
00:07:49.620  ************************************
00:07:49.620   10:22:41  -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:07:49.620   10:22:41  -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:07:49.620   10:22:41  -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:07:49.620   10:22:41  -- spdk/autotest.sh@228 -- # [[ 0 -eq 1 ]]
00:07:49.620   10:22:41  -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:07:49.620   10:22:41  -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:07:49.620   10:22:41  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:49.620   10:22:41  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:49.620   10:22:41  -- common/autotest_common.sh@10 -- # set +x
00:07:49.620  ************************************
00:07:49.620  START TEST nvme_rpc
00:07:49.620  ************************************
00:07:49.620   10:22:41 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:07:49.878  * Looking for test storage...
00:07:49.878  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:49.878    10:22:41 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:49.878     10:22:41 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:49.878     10:22:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:07:49.878    10:22:41 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@345 -- # : 1
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@365 -- # decimal 1
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@353 -- # local d=1
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@355 -- # echo 1
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@366 -- # decimal 2
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@353 -- # local d=2
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:49.878     10:22:41 nvme_rpc -- scripts/common.sh@355 -- # echo 2
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:49.878    10:22:41 nvme_rpc -- scripts/common.sh@368 -- # return 0
00:07:49.878    10:22:41 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:49.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.879  		--rc genhtml_branch_coverage=1
00:07:49.879  		--rc genhtml_function_coverage=1
00:07:49.879  		--rc genhtml_legend=1
00:07:49.879  		--rc geninfo_all_blocks=1
00:07:49.879  		--rc geninfo_unexecuted_blocks=1
00:07:49.879  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.879  		'
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:49.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.879  		--rc genhtml_branch_coverage=1
00:07:49.879  		--rc genhtml_function_coverage=1
00:07:49.879  		--rc genhtml_legend=1
00:07:49.879  		--rc geninfo_all_blocks=1
00:07:49.879  		--rc geninfo_unexecuted_blocks=1
00:07:49.879  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.879  		'
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:49.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.879  		--rc genhtml_branch_coverage=1
00:07:49.879  		--rc genhtml_function_coverage=1
00:07:49.879  		--rc genhtml_legend=1
00:07:49.879  		--rc geninfo_all_blocks=1
00:07:49.879  		--rc geninfo_unexecuted_blocks=1
00:07:49.879  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.879  		'
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:49.879  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:49.879  		--rc genhtml_branch_coverage=1
00:07:49.879  		--rc genhtml_function_coverage=1
00:07:49.879  		--rc genhtml_legend=1
00:07:49.879  		--rc geninfo_all_blocks=1
00:07:49.879  		--rc geninfo_unexecuted_blocks=1
00:07:49.879  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:49.879  		'
00:07:49.879   10:22:41 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:49.879    10:22:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=()
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs))
00:07:49.879     10:22:41 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs
00:07:49.879     10:22:41 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=()
00:07:49.879     10:22:41 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs
00:07:49.879     10:22:41 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:07:49.879      10:22:41 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:07:49.879      10:22:41 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh
00:07:49.879     10:22:41 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:07:49.879     10:22:41 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0
00:07:49.879    10:22:41 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0
00:07:49.879   10:22:41 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
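The `get_first_nvme_bdf` trace above pipes `gen_nvme.sh`'s JSON config through `jq -r '.config[].params.traddr'` and echoes the first address, yielding `bdf=0000:00:10.0`. The same extraction can be illustrated without jq — `first_nvme_traddr` is a hypothetical helper, not part of SPDK, and it assumes the `"traddr"` key appears in the JSON text it reads from stdin:

```shell
#!/usr/bin/env bash
# first_nvme_traddr: print the first "traddr" value found in JSON on
# stdin, mimicking jq -r '.config[].params.traddr' | head -n 1.
first_nvme_traddr() {
	sed -n 's/.*"traddr": *"\([^"]*\)".*/\1/p' | head -n 1
}
```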
00:07:49.879   10:22:41 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=51033
00:07:49.879   10:22:41 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:07:49.879   10:22:41 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:07:49.879   10:22:41 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 51033
00:07:49.879   10:22:41 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 51033 ']'
00:07:49.879   10:22:41 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:49.879   10:22:41 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:49.879  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:49.879   10:22:41 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:49.879   10:22:41 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:49.879   10:22:41 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:49.879  [2024-12-09 10:22:41.939631] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:49.879  [2024-12-09 10:22:41.939779] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:50.137  EAL: TSC is not safe to use in SMP mode
00:07:50.137  EAL: TSC is not invariant
00:07:50.137  [2024-12-09 10:22:42.245425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:50.137  [2024-12-09 10:22:42.271273] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:50.137  [2024-12-09 10:22:42.271305] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:07:50.137  [2024-12-09 10:22:42.271633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:50.137  [2024-12-09 10:22:42.271503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:50.703   10:22:42 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:50.703   10:22:42 nvme_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:50.703   10:22:42 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:07:50.961  [2024-12-09 10:22:42.884734] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation
00:07:50.961  Nvme0n1
00:07:50.961   10:22:42 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:07:50.961   10:22:42 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:07:50.961  request:
00:07:50.961  {
00:07:50.961    "bdev_name": "Nvme0n1",
00:07:50.961    "filename": "non_existing_file",
00:07:50.961    "method": "bdev_nvme_apply_firmware",
00:07:50.961    "req_id": 1
00:07:50.961  }
00:07:50.961  Got JSON-RPC error response
00:07:50.961  response:
00:07:50.961  {
00:07:50.961    "code": -32603,
00:07:50.961    "message": "open file failed."
00:07:50.961  }
00:07:51.218   10:22:43 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:07:51.218   10:22:43 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:07:51.218   10:22:43 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:07:51.218   10:22:43 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:07:51.218   10:22:43 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 51033
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 51033 ']'
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 51033
00:07:51.218    10:22:43 nvme_rpc -- common/autotest_common.sh@959 -- # uname
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:07:51.218    10:22:43 nvme_rpc -- common/autotest_common.sh@962 -- # ps -c -o command 51033
00:07:51.218    10:22:43 nvme_rpc -- common/autotest_common.sh@962 -- # tail -1
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:07:51.218  killing process with pid 51033
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 51033'
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@973 -- # kill 51033
00:07:51.218   10:22:43 nvme_rpc -- common/autotest_common.sh@978 -- # wait 51033
00:07:51.476  
00:07:51.476  real	0m1.706s
00:07:51.476  user	0m3.217s
00:07:51.476  sys	0m0.454s
00:07:51.476  ************************************
00:07:51.476  END TEST nvme_rpc
00:07:51.476  ************************************
00:07:51.476   10:22:43 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:51.476   10:22:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:51.476   10:22:43  -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:07:51.476   10:22:43  -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:51.476   10:22:43  -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:51.476   10:22:43  -- common/autotest_common.sh@10 -- # set +x
00:07:51.476  ************************************
00:07:51.476  START TEST nvme_rpc_timeouts
00:07:51.476  ************************************
00:07:51.476   10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:07:51.476  * Looking for test storage...
00:07:51.476  * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:07:51.476    10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:07:51.476     10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version
00:07:51.476     10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:07:51.735    10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-:
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-:
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<'
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 ))
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:07:51.735     10:22:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:07:51.735    10:22:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0
00:07:51.735    10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:07:51.735    10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:07:51.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.735  		--rc genhtml_branch_coverage=1
00:07:51.735  		--rc genhtml_function_coverage=1
00:07:51.735  		--rc genhtml_legend=1
00:07:51.735  		--rc geninfo_all_blocks=1
00:07:51.735  		--rc geninfo_unexecuted_blocks=1
00:07:51.735  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:51.735  		'
00:07:51.735    10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:07:51.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.735  		--rc genhtml_branch_coverage=1
00:07:51.735  		--rc genhtml_function_coverage=1
00:07:51.735  		--rc genhtml_legend=1
00:07:51.735  		--rc geninfo_all_blocks=1
00:07:51.735  		--rc geninfo_unexecuted_blocks=1
00:07:51.735  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:51.735  		'
00:07:51.735    10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:07:51.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.735  		--rc genhtml_branch_coverage=1
00:07:51.735  		--rc genhtml_function_coverage=1
00:07:51.735  		--rc genhtml_legend=1
00:07:51.735  		--rc geninfo_all_blocks=1
00:07:51.735  		--rc geninfo_unexecuted_blocks=1
00:07:51.735  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:51.735  		'
00:07:51.735    10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 
00:07:51.735  		--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:07:51.735  		--rc genhtml_branch_coverage=1
00:07:51.735  		--rc genhtml_function_coverage=1
00:07:51.735  		--rc genhtml_legend=1
00:07:51.735  		--rc geninfo_all_blocks=1
00:07:51.735  		--rc geninfo_unexecuted_blocks=1
00:07:51.735  		--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh
00:07:51.735  		'
00:07:51.735   10:22:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:07:51.735   10:22:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_51070
00:07:51.735   10:22:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_51070
00:07:51.735   10:22:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=51106
00:07:51.735   10:22:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:07:51.735   10:22:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
00:07:51.735   10:22:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 51106
00:07:51.735   10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 51106 ']'
00:07:51.735   10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:51.735   10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:51.735  Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:51.735   10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:51.735   10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:51.735   10:22:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:07:51.735  [2024-12-09 10:22:43.655715] Starting SPDK v25.01-pre git sha1 51286f61a / DPDK 24.03.0 initialization...
00:07:51.735  [2024-12-09 10:22:43.655867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:07:51.994  EAL: TSC is not safe to use in SMP mode
00:07:51.994  EAL: TSC is not invariant
00:07:51.994  [2024-12-09 10:22:43.956091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:51.994  [2024-12-09 10:22:43.983555] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:07:51.994  [2024-12-09 10:22:43.983590] app.c: 952:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:07:51.994  [2024-12-09 10:22:43.983764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:51.994  [2024-12-09 10:22:43.983721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:52.559   10:22:44 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:52.559   10:22:44 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0
00:07:52.559  Checking default timeout settings:
00:07:52.559   10:22:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:07:52.560   10:22:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:07:52.817  Making settings changes with rpc:
00:07:52.817   10:22:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:07:52.817   10:22:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:07:52.817  Check default vs. modified settings:
00:07:52.817   10:22:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:07:52.818   10:22:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_51070
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_51070
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']'
00:07:53.079  Setting action_on_timeout is changed as expected.
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_51070
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_51070
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']'
00:07:53.079  Setting timeout_us is changed as expected.
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_51070
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}'
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_51070
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g'
00:07:53.079    10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']'
00:07:53.079  Setting timeout_admin_us is changed as expected.
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_51070 /tmp/settings_modified_51070
00:07:53.079   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 51106
00:07:53.079   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 51106 ']'
00:07:53.079   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 51106
00:07:53.079    10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname
00:07:53.079   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' FreeBSD = Linux ']'
00:07:53.079    10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # ps -c -o command 51106
00:07:53.079    10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # tail -1
00:07:53.079   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # process_name=spdk_tgt
00:07:53.079   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' spdk_tgt = sudo ']'
00:07:53.080  killing process with pid 51106
00:07:53.080   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 51106'
00:07:53.080   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 51106
00:07:53.080   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 51106
00:07:53.339  RPC TIMEOUT SETTING TEST PASSED.
00:07:53.339   10:22:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
00:07:53.339  
00:07:53.339  real	0m1.864s
00:07:53.339  user	0m3.559s
00:07:53.339  sys	0m0.549s
00:07:53.339   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:53.339   10:22:45 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:07:53.339  ************************************
00:07:53.339  END TEST nvme_rpc_timeouts
00:07:53.339  ************************************
00:07:53.339    10:22:45  -- spdk/autotest.sh@239 -- # uname -s
00:07:53.339   10:22:45  -- spdk/autotest.sh@239 -- # '[' FreeBSD = Linux ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@243 -- # [[ 0 -eq 1 ]]
00:07:53.339   10:22:45  -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]]
00:07:53.339   10:22:45  -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@260 -- # timing_exit lib
00:07:53.339   10:22:45  -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:53.339   10:22:45  -- common/autotest_common.sh@10 -- # set +x
00:07:53.339   10:22:45  -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:07:53.339   10:22:45  -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:07:53.339   10:22:45  -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:07:53.339   10:22:45  -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:07:53.339   10:22:45  -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:07:53.339   10:22:45  -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:07:53.339   10:22:45  -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:07:53.339   10:22:45  -- common/autotest_common.sh@726 -- # xtrace_disable
00:07:53.339   10:22:45  -- common/autotest_common.sh@10 -- # set +x
00:07:53.339   10:22:45  -- spdk/autotest.sh@388 -- # autotest_cleanup
00:07:53.339   10:22:45  -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:07:53.339   10:22:45  -- common/autotest_common.sh@1397 -- # xtrace_disable
00:07:53.339   10:22:45  -- common/autotest_common.sh@10 -- # set +x
00:07:54.272  setup.sh cleanup function not yet supported on FreeBSD
00:07:54.272   10:22:46  -- common/autotest_common.sh@1453 -- # return 0
00:07:54.272   10:22:46  -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:07:54.272   10:22:46  -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:54.272   10:22:46  -- common/autotest_common.sh@10 -- # set +x
00:07:54.272   10:22:46  -- spdk/autotest.sh@391 -- # timing_exit autotest
00:07:54.272   10:22:46  -- common/autotest_common.sh@732 -- # xtrace_disable
00:07:54.272   10:22:46  -- common/autotest_common.sh@10 -- # set +x
00:07:54.272   10:22:46  -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:07:54.272   10:22:46  -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:07:54.272   10:22:46  -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:07:54.272    10:22:46  -- spdk/autotest.sh@398 -- # hostname
00:07:54.272   10:22:46  -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t freebsd-cloud-1725281765-2372.local -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:07:54.530  geninfo: WARNING: invalid characters removed from testname!
00:07:57.059  geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvmf/mdns_server.gcda
00:08:03.615   10:22:54  -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:08:08.875   10:23:00  -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:08:10.773   10:23:02  -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:08:13.316   10:23:05  -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:08:16.594   10:23:08  -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:08:18.797   10:23:10  -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:08:21.325   10:23:13  -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:08:21.325   10:23:13  -- spdk/autorun.sh@1 -- $ timing_finish
00:08:21.325   10:23:13  -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:08:21.325   10:23:13  -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:08:21.325   10:23:13  -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:08:21.325   10:23:13  -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:08:21.583  + [[ -n 1306 ]]
00:08:21.583  + sudo kill 1306
00:08:21.591  [Pipeline] }
00:08:21.607  [Pipeline] // timeout
00:08:21.613  [Pipeline] }
00:08:21.627  [Pipeline] // stage
00:08:21.633  [Pipeline] }
00:08:21.647  [Pipeline] // catchError
00:08:21.657  [Pipeline] stage
00:08:21.660  [Pipeline] { (Stop VM)
00:08:21.672  [Pipeline] sh
00:08:21.951  + vagrant halt
00:08:24.479  ==> default: Halting domain...
00:08:51.040  [Pipeline] sh
00:08:51.321  + vagrant destroy -f
00:08:53.859  ==> default: Removing domain...
00:08:53.872  [Pipeline] sh
00:08:54.152  + mv output /var/jenkins/workspace/freebsd-vg-autotest/output
00:08:54.161  [Pipeline] }
00:08:54.176  [Pipeline] // stage
00:08:54.182  [Pipeline] }
00:08:54.198  [Pipeline] // dir
00:08:54.204  [Pipeline] }
00:08:54.221  [Pipeline] // wrap
00:08:54.227  [Pipeline] }
00:08:54.240  [Pipeline] // catchError
00:08:54.249  [Pipeline] stage
00:08:54.251  [Pipeline] { (Epilogue)
00:08:54.265  [Pipeline] sh
00:08:54.545  + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:09:09.505  [Pipeline] catchError
00:09:09.508  [Pipeline] {
00:09:09.521  [Pipeline] sh
00:09:09.794  + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:09:09.794  Artifacts sizes are good
00:09:09.802  [Pipeline] }
00:09:09.816  [Pipeline] // catchError
00:09:09.826  [Pipeline] archiveArtifacts
00:09:09.833  Archiving artifacts
00:09:10.081  [Pipeline] cleanWs
00:09:10.091  [WS-CLEANUP] Deleting project workspace...
00:09:10.091  [WS-CLEANUP] Deferred wipeout is used...
00:09:10.097  [WS-CLEANUP] done
00:09:10.098  [Pipeline] }
00:09:10.116  [Pipeline] // stage
00:09:10.121  [Pipeline] }
00:09:10.133  [Pipeline] // node
00:09:10.136  [Pipeline] End of Pipeline
00:09:10.165  Finished: SUCCESS